If you are reasoning about all possible agents that could ever exist you are not allowed to assume either of these.
But you are in fact making such assumptions, so you are not reasoning about all possible agents; you are reasoning about some narrower class of agents (and your conclusions may indeed be correct for those agents, but that is not relevant to the orthogonality thesis).
I do not agree.
My proposition is that all intelligent agents will converge on "prepare for any goal" (basically Power Seeking), which is the opposite of what the Orthogonality Thesis states.