Some low-level observations I have of Accelerationism (NOTE: I have not yet analyzed Accelerationism deeply and might do a really good job of this in the future, these should be taken as butterfly ideas):
They seem to be very focused on aesthetics when evaluating the future, rather than philosophy. This makes sense: philosophy has a bad reputation for being saturated with obscurantist bullshit, but it also contains a lot of logically coherent material, like the theory of value, without which people can't really judge the future. They can't realistically be expected to have it, because a large proportion of professional philosophers themselves have minds saturated with obscurantist bullshit.
Compared to the rest of society, Accelerationists can reasonably be considered "truly awakened" in their understanding of reality and what matters. Given the assumption that smarter-than-human AI is safe/good, they actually are vastly better oriented to humanity's current situation than >99% of people on earth; AI safety is the only thing that challenges that assumption. Given that someone is an accelerationist, they must either be only dimly aware of AI safety arguments, or see "doomers" as an outgroup that they are in a clan war with and whose arguments they must destroy. This makes them fundamentally similar to AI safety and EA themselves, except that AIS and EA are built on a foundation of research while e/acc is built on a foundation of narratives. Hotz seems to have realized some of this, and has decided to bridge the gap and test AI safety's epistemics, while remaining a participant in the clan war.
It would be really interesting to see the origins of Accelerationist ideology. AI safety's origins are basically a ton of people going "holy crap, the future is real and important, and we can totally fuck it up and we should obviously do something about that". It would be really notable if Accelerationist ideology was not merely an outgrowth of libertarianism, but was also heavily influenced by the expected profit motive, similar to cryptocurrency ideology (or, more significantly, was being spread on social media by bots optimizing for appeal, although that would be difficult to verify). That would potentially explain why Accelerationism is rather shallow in many different ways, and, more significantly, would bear on the prospects for ending/preventing clan-war dynamics between Accelerationism (an ideology that is extraordinarily friendly to top AI capabilities researchers) and AI safety.
There's a big difference between philosophy and thinking about unlikely future scenarios that are very different from our world. In fact, those two things have little overlap. Although it's not always clear, (I think) this discussion isn't about aesthetics or philosophy; it's about scenarios that are fairly simple to judge but have so many possible variations, and are so difficult to predict, that it seems pointless to even try. This feeling of futility is the parallel with philosophy, much of which just digests and distills questions into more questions, never giving an answer, until a question is no longer philosophy and can be answered by someone else.
The discussion is about whether or not human civilization will destroy itself through negligence and an inability to cooperate. The risk may be real or imagined. You may care about future humans or not. But none of that makes this philosophy or aesthetics. The questions are very concrete, not general, and they're fairly objective (people agree far more on whether civilization is good than on what beauty is).
I really don’t know what you’re saying. To attack an obvious straw man and thus give you at least some starting point for explaining further: Generally, I’d be extremely sceptical of any claim about some tiny coherent group of people understanding something important better than 99% of humans on earth. To put it polemically, for most such claims, either it’s not really important (maybe we don’t really know if it is?), it won’t stay that way for long, or you’re advertising for a cult. The phrase “truly awakened” doesn’t bode well here… Feel free to explain what you actually meant rather than responding to this.
Assuming these “ideologies” you speak of really exist in a coherent fashion, I’d try to summarize “Accelerationist ideology” as saying: “technological advancement (including AI) will accelerate a lot, change the world in unimaginable ways and be great, let’s do that as quickly as possible”, while “AI safety (LW version)” as saying “it might go wrong and be catastrophic/unrecoverable; let’s be very careful”. If anything, these ideas as ideologies are yet to get out into the world and might never have any meaningful impact at all. They might not even work on their own as ideologies (maybe we mean different things by that word).
So why are the origins interesting? What do you hope to learn from them? What does it matter if one of those is an “outgrowth” of one thing more than some other? It’s very hard for me to evaluate something like how “shallow” they are. It’s not like there’s some single manifesto or something. I don’t see how that’s a fruitful direction to think about.