Remember that a superintelligence will be at least several orders of magnitude more persuasive than character.ai or Stuart Armstrong.
Believing this seems central to believing high P(doom).
But I think it’s not a coherent enough concept to justify believing it. Yes, some people are far more persuasive than others. But how can you extrapolate that far beyond the distribution we observe in humans? I do think AI will prove to be better than humans at this, and likely much better.
But “much” better isn’t the same as “better enough to be effectively treated as magic”.
Well, even the tail of the human distribution is pretty scary. A single human with a lot of social skills can become the leader of a whole nation, or even a prophet considered literally a divine being. This has already happened several times in history, even in eras when you had to be physically close to people to convince them.
A few things I’ve seen give pretty worrying lower bounds for how persuasive a superintelligence would be:
How it feels to have your mind hacked by an AI
The AI in a box boxes you (content warning: creepy blackmail-y acausal stuff)