I think not enforcing an “in or out” boundary is a big contributor to this degradation. The majorly successful religions required all kinds of sacrifice.
I think I’m reasonably Catholic, even though I don’t know anything about the living Catholic leaders.
I think being a Catholic with no connection to living leaders makes more sense than being an EA who doesn’t have a leader they trust and respect, because Catholicism has a longer tradition, and you can work within that. On the other hand… I wouldn’t say this to most people, but my model is you’d prefer I be this blunt… my understanding is that Catholicism is about submission to the hierarchy, and if you’re not doing that, or don’t actively believe the hierarchy is worthy of it, you’re LARPing. I don’t think this is true of (most?) Protestant denominations: working from books and a direct line to God is their jam. But Catholicism cares much more about authority and authorization.
It feels like AI safety is the best current candidate for [lifeboat], though it is also much less cohesive and, in a bunch of ways, not a direct successor. I too have been wondering lately what “post-EA” looks like.
I’d love for this to be true, because I think AIS is EA’s most important topic. OTOH, I think AIS might have been what poisoned EA? The global development people seem much more grounded (to this day), and AFAIK the Ponzi-scheme recruiting is all aimed at AIS and meta (which is mostly AIS anyway). Earning to give was a much more viable role for global development than for AIS.
I agree, and am fairly worried about AI safety taking over too much of EA. EA is about taking ideas seriously, but also about doing real things in the world with feedback loops. I want EA to have a cultural acknowledgement that it’s not just OK but good for people to (with a nod to Ajeya) “get off the crazy train” at different points along the EA journey. We currently have too many people taking it all the way into AI town. Again, I don’t know what to do to fix it.
I think it’s good to want moderating impulses on people doing extreme things to fit in. But insofar as you’re saying that believing ‘AI is an existential threat to our civilization’ is ‘crazy town’, I don’t really know what to say. I don’t believe it’s crazy town, and I don’t think that thinking it’s crazy town is a reasonable position. Civilization is investing billions of dollars into growing AI systems that we don’t understand, and they’re getting more capable by the month. They talk, they beat us at Go, and they speed up our code significantly. And this is just the start: companies are raising massive amounts of money to scale these systems.
I worry you’re caught up in worrying what people might’ve thought about you for thinking that ten years ago. Not only is this idea now well within the Overton window, but my sense is that people saying it’s ‘crazy town’ either haven’t engaged with the arguments (e.g.) or are somehow throwing their own ability to do basic reasoning out the window.
Added: I recognize it’s rude to psychologize here, but I read what you wrote as saying that the thing I expect to kill me and everyone I love doesn’t exist, and that I’m crazy for thinking it. So I’m naturally a bit scared by you asserting that as though it’s the default and correct position.
(Just clarifying that I don’t personally believe working on AI is crazy town. I’m quoting a thing that made an impact on me a while back, and that I still think is culturally relevant for the EA movement.)
I think feedback loops are good, but how is that incompatible with taking AI seriously? At this point, even if you want to work on things with tighter feedback loops, AI seems like the central game in town (probably by developing technology that leverages it, while thinking carefully about the indirect effects of that, or at the very least, by being in touch with how it will affect whatever other problem you are trying to solve, since it will probably affect all of them).
This is a good point. In my ideal movement, it makes perfect sense to disagree with every leader and yet still be a central member of the group. LessWrong has basically pulled that off. EA somehow managed to be bad at having leaders (both in the sense that the closest things to leaders don’t want to be closer, and in the sense that I don’t respect them), while being the sort of thing that requires leaders.
As an additional comment, few organizations have splintered more publicly than Catholicism; it seems sort of surreal to me to not check whether or not you ended up on the right side of the splintering. [This is probably more about theological questions than it is about leadership, but as you say, the leadership is relevant!]
I feel ambivalent about this. On one hand, yes, you need to have standards, and I think EA’s move towards big-tentism degraded it significantly. On the other hand, I think a sharp inclusion function is bad for people in a movement[1], cuts the movement off from useful work done outside itself, selects for people searching for validation and belonging, and selects against thoughtful people with other options.
If you’re only as good as your last 3 months, no one can take time to rest and reflect, much less recover from burnout.
I reject the implication that AI town is the last stop on the crazy train.
Catholic EA: You have a leader you trust and respect, and defer to their judgement.
Sola Fide EA: You read 80,000 Hours and GiveWell, but you keep your own spreadsheet of EV calculations.