Yeah, I really like this idea—at least in principle. Looking for value agreement, and for where our maps (which are likely expressed in extremely different terms) match, is something I think we don't do nearly enough.
To get at what worries me about some of the 'EA needs to consider other viewpoints' discourse (and not at all about what you just wrote), let me describe two positions:
1. EA needs to get better at communicating with non-EA people, and at seeing the ways they have important information and often know things we do not, even if they speak in ways we find hard to match up with concepts like 'Bayesian updates', 'expected value', or even 'cost effectiveness'.
2. EA needs to become less elitist, nerdy, jargon-laden, and weird so that it can have a bigger impact on the broader world.
I fully embrace 1, subject to some constraints: sometimes it is too expensive to translate an idea into a discourse we are good at understanding, sometimes we have weird infohazard-type edge cases, and the like.
2, though, strikes me as extremely dangerous.
To make a metaphor: coffee is not the only type of good drink; it is bitter and filled with psychoactive substances that give some people heart palpitations. That does not mean it would be a good idea to dilute coffee with apple juice so that it appeals to people who don't like the taste of coffee and are caffeine-sensitive.
The EA community is the EA community: it currently works (to some extent), and it is doing important and influential work. Part of what makes it work as a community is the unifying effect of having our own weird cultural touchstones and documents. The barrier of exclusivity created by the jargon and the elitism, and the fact that it is one of the few spaces where the majority of people are explicit utilitarians, is part of what makes it able to succeed (to the extent it does).
My intuition is that an EA without all of these features wouldn't be a more accessible and open community able to do more good in the world. My intuition is that it would be a dead community where everyone has gone on to other interests, and that therefore does no good at all.
Obviously there is a middle ground—shifts in the culture of the community that improve our Pareto frontier of openness and accessibility while maintaining community cohesion and appeal.
However, I don't think this worry is what you were actually talking about. I think you were really focusing on us having cognitive blind spots, which is obviously true and important.
Well written! Most of this definitely resonates with me.
Quick thoughts:
Some of the jargon I've heard sounded plain silly from a making-intellectual-progress perspective (not just implicitly aggrandising). It makes it harder to share our reasoning, even with each other, in a comprehensible, high-fidelity way. I like Rob Wiblin's guide on jargon.
Perhaps we put too much emphasis on making explicit communication comprehensible. It might be more fruitful to find ways to recognise how particular communities are set up to be good at understanding or making progress in particular problem niches, even if we struggle to comprehend what they're specifically saying or doing.
(I was skeptical of the claim that the 'majority of people are explicit utilitarians' – i.e. utilitarian, not just consequentialist or some pluralistic mix of moral views – but EA Survey responses seem to back it up: ~70% utilitarian.)