This seems to be neither about AI nor about doom. It’s about LLMs accelerating some human trends that are unpleasant.
I actually agree that this is a bigger threat to short-term, widespread human happiness than actual AI doom, but I don’t want to mix up the two topics.
It’s not directly about AGI, no. But it could be a way to change a skeptic’s mind about AI risk. Which could be useful if they’re a regulator/politician.