So you are implicitly assuming that the agent cares about certain things, such as its future states.
But the is-ought problem is the very observation that “there seems to be a significant difference between descriptive or positive statements (about what is) and prescriptive or normative statements (about what ought to be), and that it is not obvious how one can coherently move from descriptive statements to prescriptive ones”.
You have not solved the problem; you have merely assumed it to be solved, without proof.
There are two propositions here:
1. The agent does not do anything unless a goal is assigned.
2. The agent does not do anything if it is certain that a goal will never be assigned.
Which one do you think is assumed without proof? In my opinion, the first.
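A minimal sketch of the distinction between these two propositions, using a hypothetical toy agent loop (the class and method names are illustrative assumptions, not anything referenced in the thread):

```python
from typing import Optional


class IdleUntilGoalAgent:
    """Proposition 1: do nothing unless a goal is currently assigned."""

    def act(self, goal: Optional[str]) -> str:
        if goal is None:
            return "wait"
        return f"pursue {goal}"


class PrepareForAnyGoalAgent:
    """Proposition 2: do nothing only if it is certain that a goal will never
    be assigned; otherwise spend idle time keeping options open."""

    def __init__(self, goal_is_impossible: bool):
        self.goal_is_impossible = goal_is_impossible

    def act(self, goal: Optional[str]) -> str:
        if goal is not None:
            return f"pursue {goal}"
        if self.goal_is_impossible:
            return "wait"
        return "gather resources / keep options open"


if __name__ == "__main__":
    print(IdleUntilGoalAgent().act(None))                              # wait
    print(PrepareForAnyGoalAgent(goal_is_impossible=False).act(None))  # keep options open
```

Under proposition 1 the agent idles whenever no goal is currently assigned; under proposition 2 it idles only when it is certain no goal will ever arrive, and otherwise spends the interim preparing.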
If you are reasoning about all possible agents that could ever exist, you are not allowed to assume either of these.
But you are in fact making such assumptions, so you are not reasoning about all possible agents; you are reasoning about some narrower class of agents (and your conclusions may indeed be correct for those agents, but that is not relevant to the orthogonality thesis).
I do not agree.
My proposition is that all intelligent agents will converge to “prepare for any goal” (basically Power Seeking), which is the opposite of what the Orthogonality Thesis states.
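To make “prepare for any goal” concrete, here is a minimal, hypothetical sketch in which a goal-less agent ranks moves by how many states they keep reachable within a short horizon. The state graph, names, and horizon are assumptions chosen for illustration, loosely in the spirit of formal optionality/power measures rather than any definition used in this thread:

```python
from collections import deque

# Hypothetical state graph: keys are states, values are states reachable in one step.
GRAPH = {
    "corridor": ["room_a", "room_b", "hub"],
    "room_a": ["corridor"],
    "room_b": ["corridor"],
    "hub": ["corridor", "room_c", "room_d"],
    "room_c": ["hub"],
    "room_d": ["hub"],
}


def reachable_within(state: str, horizon: int) -> int:
    """Count states reachable from `state` in at most `horizon` steps (BFS)."""
    seen = {state}
    frontier = deque([(state, 0)])
    while frontier:
        s, d = frontier.popleft()
        if d == horizon:
            continue
        for nxt in GRAPH[s]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return len(seen)


def best_preparation_move(state: str, horizon: int = 2) -> str:
    """With no goal assigned yet, move to the neighbour that keeps the most states reachable."""
    return max(GRAPH[state], key=lambda s: reachable_within(s, horizon))


if __name__ == "__main__":
    # The goal-less agent drifts toward high-optionality states: from room_a it
    # returns to the corridor, and from the corridor it moves to the hub.
    print(best_preparation_move("room_a"))    # corridor
    print(best_preparation_move("corridor"))  # hub
```

Whether such option-preserving behaviour follows for all possible agents, or only for a narrower class, is exactly the point in dispute above.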