You can obviously create AIs that don’t try to achieve anything in the world, and they are sometimes useful for various reasons. But some people who are trying to achieve things in the world find it advantageous to build AIs that also try to achieve things in the world, and the existence of those people is sufficient to create existential risk.
But it’s not the orthogonality thesis telling you that.
The orthogonality thesis is indeed not sufficient to derive all of AI safety, but that doesn’t make it trivial.