There’s a big difference between philosophy and thinking about unlikely future scenarios that are very different from our world; in fact, the two have little overlap. Although it’s not always clear, (I think) this discussion isn’t about aesthetics or about philosophy. It’s about scenarios that are fairly simple to judge but have so many possible variations, and are so difficult to predict, that it seems pointless to even try. That feeling of futility is the parallel with philosophy, much of which just digests and distills questions into more questions, never giving an answer, until a question is no longer philosophy and can be answered by someone else.
The discussion is about whether or not human civilization will destroy itself through negligence and an inability to cooperate. This risk may be real or imagined. You may care about future humans or not. But none of that makes this philosophy or aesthetics. The questions are very concrete, not general, and they’re fairly objective (people agree far more on whether civilization is good than on what beauty is).
I really don’t know what you’re saying. To attack an obvious straw man, and thus give you at least some starting point for explaining further: generally, I’d be extremely sceptical of any claim that some tiny coherent group of people understands something important better than 99% of humans on earth. To put it polemically, for most such claims, either the thing isn’t really important (maybe we don’t really know whether it is?), the situation won’t stay that way for long, or you’re advertising for a cult. The phrase “truly awakened” doesn’t bode well here… Feel free to explain what you actually meant rather than responding to this.
Assuming these “ideologies” you speak of really exist in any coherent fashion, I’d summarize the “Accelerationist ideology” as saying: “technological advancement (including AI) will accelerate enormously, change the world in unimaginable ways, and be great; let’s make that happen as quickly as possible”, and “AI safety (LW version)” as saying: “it might go wrong and be catastrophic/unrecoverable; let’s be very careful”. If anything, these ideas as ideologies have yet to get out into the world and might never have any meaningful impact at all. They might not even work as ideologies on their own (maybe we mean different things by that word).
So why are the origins interesting? What do you hope to learn from them? What does it matter whether one of these is an “outgrowth” of one thing rather than another? It’s very hard for me to evaluate something like how “shallow” they are; there’s no single manifesto or anything like it. I don’t see how that’s a fruitful direction to think about.