Regarding the “CAIS scenario” vs. “general AI” scenarios, I think there are strong forces pushing towards the latter. For any actor interested in economic, political, or military gain, there are large returns on long-term planning applied to open-ended real-world problems, so there are strong incentives to create systems capable of it. As you correctly notice, such systems will eventually converge on extremely malicious strategies. The result is a tragedy of the commons pushing towards the deployment of many powerful and general AI systems: in the short term these systems benefit the actors deploying them, while in the long term they destroy all human value.