Yeah, it seems awesome for us to figure out where we fit within that global portfolio! Especially in policy efforts, that could enable us to build a more accurate and broadly reflective consensus to help centralised institutions improve the larger-scale decisions they make (see a general case for not channelling our current efforts towards making EA the dominant approach to decision-making).
To clarify, I hope this post helps readers become more aware of the brightspots (vs. blindspots) they might hold in common with like-minded collaborators – i.e. the areas they notice (vs. miss) that map to relevant aspects of the underlying territory.
I’m trying to encourage myself and the friends I collaborate with to build up an understanding of the alternative approaches that outside groups take (i.e. their ways of mapping and navigating their surrounding environment), and where those approaches might complement ours. Not necessarily for us to take up more simultaneous mental styles or to widen our mental focus or areas of specialisation, but to be able to hold outside groups’ views well enough that we get roughly where they are coming from, can communicate from their perspective, and can form mutually beneficial partnerships.
More fundamentally, as human apes, our senses are exposed to an environment that is much more complex than we are. So we don’t have the capacity to process our surroundings fully, nor to perceive all the relevant underlying aspects at once. To map the environment we are embedded in, we need robust constraints for encoding moment-to-moment observations, through layers of inductive biases, into stable representations.
Different-minded groups end up with different maps. But in order to learn from outside critics of EA, we need to be able to line up our map better with theirs.
Let me throw in an excerpt from an intro draft for the tool I’m developing. Curious to hear your thoughts!
Take two principles for a collaborative conversation in LessWrong and Effective Altruism:
Your map is not the territory: Your interlocutor may have surveyed a part of the bigger environment that you haven’t seen yet. Selfishly ask for their map, line up the pieces of their map with your map, and combine them to more accurately reflect the underlying territory.
Seek alignment: Rewards can be hacked. Find a collaborator whose values align with your values so you can rely on them to make progress on the problems you care about.
When your interlocutor happens to have a compatible map and aligned values, such principles will guide you to learn missing information and collaborate smoothly.
On the flipside, you will hit a dead end in your new conversation when:
you can’t line up their map with yours to form a shared understanding of the territory. E.g. you find their arguments inscrutable.
you don’t converge on shared overarching aims for navigating the territory. E.g. double cruxes tend to bottom out at value disagreements.
You can resolve that tension with a mental shortcut: When you get confused about what they mean and fundamentally disagree on what they find important, just get out of their way. Why sink more of your time into a conversation that doesn’t reveal any new insights to you? Why risk fuelling a conflict?
This makes sense, but it also omits a deeper question: why can’t you grasp their perspective? Maybe they don’t think things through as rigorously as you do, and you pick up on that. Maybe they express their beliefs or preferences dishonestly, and you pick up on that. Or maybe they honestly shared insights that you failed to pick up on.
Underlying each word you exchange is your perception of the surrounding territory … A word’s common definition masks our perceptual divide. Say you and I both look at the same thing and agree which defined term describes it. Then we can mention this term as a pointer to what we both saw. Yet the environment I perceive and point the term to may be very different from the environment you perceive.
Different-minded people can illuminate our blindspots. Across the areas they chart and the paths they navigate lie nuggets – aspects of reality we don’t even know about yet that we will come to care about.
Yeah, I really like this idea—at least in principle. Looking for value agreement, and for where our maps (which are likely verbally extremely different) match, is something I think we don’t do nearly enough.
To get at what worries me about some of the ‘EA needs to consider other viewpoints’ discourse (and not at all about what you just wrote), let me describe two positions:
1. EA needs to get better at communicating with non-EA people, and at seeing the ways they have important information and often know things we do not, even if they speak in ways that we find hard to match up with concepts like ‘Bayesian updates’, ‘expected value’, or even ‘cost-effectiveness’.
2. EA needs to become less elitist, nerdy, jargon-laden, and weird so that it can have a bigger impact on the broader world.
I fully embrace 1, subject to constraints: sometimes it is too expensive to translate an idea into a discourse we are good at understanding, sometimes we have weird infohazard-type edge cases, and the like.
2, though, strikes me as extremely dangerous.
To make a metaphor: coffee is not the only type of good drink; it is bitter and filled with psychoactive substances that give some people heart palpitations. That does not mean it would be a good idea to dilute coffee with apple juice so that it can appeal to people who don’t like the taste of coffee and are caffeine-sensitive.
The EA community is the EA community: it currently works (to some extent), and it is doing important and influential work. Part of what makes it work as a community is the unifying effect of having our own weird cultural touchstones and documents. The barrier of exclusivity created by the jargon and the elitism, and the fact that it is one of the few spaces where the majority of people are explicit utilitarians, is part of what makes it able to succeed (to the extent it does).
My intuition is that an EA without all of these features wouldn’t be a more accessible and open community able to do more good in the world. My intuition is that it would be a dead community where everyone has gone on to other interests, and that therefore does no good at all.
Obviously there is a middle ground—shifts in the culture of the community that improve our Pareto frontier of openness and accessibility while maintaining community cohesion and appeal.
However, I don’t think this worry is what you actually were talking about. I think you really were focusing on us having cognitive blindspots, which is obviously true, and important.
Well-written! Most of this definitely resonates with me.
Quick thoughts:
Some of the jargon I’ve heard sounded plain silly from a making-intellectual-progress perspective (not just implicitly aggrandising). It makes it harder to share our reasoning, even with each other, in a comprehensible, high-fidelity way. I like Rob Wiblin’s guide on jargon.
Perhaps we put too much emphasis on making explicit communication comprehensible. Might be more fruitful to find ways to recognise how particular communities are set up to be good at understanding or making progress in particular problem niches, even if we struggle to comprehend what they’re specifically saying or doing.
(I was skeptical about the claim ‘majority of people are explicit utilitarians’ – i.e. utilitarian, not just consequentialist or some pluralistic mix of moral views – but EA Survey responses seem to back it up: ~70% utilitarian.)