Most of the catastrophic risk from AI still lies in superhuman agentic systems.
Current frontier systems are not that (and IMO not poised to become that in the immediate future).
I think AI risk advocates should be clear that they’re not saying GPT-5/Claude Next is an existential threat to humanity.
[Unless they actually believe that. But if they don’t, I’m a bit concerned that their message is being rounded up to that, and when such systems don’t reveal themselves to be catastrophically dangerous, it might erode their credibility.]