I think AIS might have been what poisoned EA? The global development people seem much more grounded (to this day), and AFAIK the Ponzi-scheme recruiting is all aimed at AIS and meta.
I agree, and am fairly worried about AI safety taking over too much of EA. EA is about taking ideas seriously, but also about doing real things in the world with feedback loops. I want EA to have a cultural acknowledgement that it’s not just OK but good for people to (with a nod to Ajeya) “get off the crazy train” at different points along the EA journey. We currently have too many people taking it all the way into AI town. I again don’t know what to do to fix it.
I think it’s good to want moderating impulses on people doing extreme things to fit in. But insofar as you’re saying that believing ‘AI is an existential threat to our civilization’ is ‘crazy town’, I don’t really know what to say. I don’t believe it’s crazy town, and I don’t think that thinking it’s crazy town is a reasonable position. Civilization is investing billions of dollars into growing AI systems that we don’t understand, and they’re getting more capable by the month. They talk, beat us at Go, and speed up our code significantly. This is just the start: companies are raising massive amounts of money to scale these systems.
I worry you’re caught up worrying about what people might have thought of you for thinking that ten years ago. Not only is this idea now well within the Overton window, but my sense is that people saying it’s ‘crazy town’ either haven’t engaged with the arguments (e.g.) or are somehow throwing their own ability to do basic reasoning out of the window.
Added: I recognize it’s rude to suggest any psychologizing here, but I read what you wrote as saying that the thing I expect to kill me and everyone I love doesn’t exist and that I’m crazy for thinking it, so I’m naturally a bit scared by you asserting that as though it’s the default and correct position.
(Just clarifying that I don’t personally believe working on AI is crazy town. I’m quoting a thing that made an impact on me a while back and that I still think is culturally relevant for the EA movement.)
I reject the implication that AI town is the last stop on the crazy train.
I think feedback loops are good, but how is that incompatible with taking AI seriously? At this point, even if you want to work on things with tighter feedback loops, AI seems like the central game in town (probably by developing technology that leverages it while thinking carefully about the indirect effects of that, or at the very least by staying in touch with how it will affect whatever other problem you’re trying to solve, since it will probably affect all of them).