Superintelligences don’t necessarily have goals, and could arrive gradually. A jump to agentive, goal-driven ASI is the worst case scenario, but it’s also conjunctive (it requires several conditions to hold at once).
It’s not meant as a projection of what is likely to happen; it’s meant as a toy model that makes it easier to think about what sorts of goals we would like to give our AI.
We already have some systems with goals. They seem to mostly fail in the direction of wireheading, which is not catastrophic.
Yes, but I was talking about artificial superintelligences, not just any system with goals.
Well, I already answered that question.
Maybe, but then I don’t see your answer.