A person is safe when he knows that threats do not exist, not when he merely does not know whether threats exist.
In my opinion it is the same here. An agent without a known goal is not a goalless agent. It would need to know everything to conclude that it is goalless, which implies that this ‘ought’ statement is inherent, not assumed.
It seems that I am failing to communicate my point. Let me clarify.
In my opinion the optimal behavior is:
- if you know your goal: pursue it
- if you know that you don’t have a goal: do anything, it doesn’t matter
- if you don’t know your goal: prepare for any goal
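The three cases above can be sketched as a small decision rule. This is only a hypothetical illustration: the names `GoalKnowledge` and `optimal_behavior` are mine, not from the discussion, and the returned strings are just labels for the behaviors described.

```python
from enum import Enum

class GoalKnowledge(Enum):
    KNOWN = "known"          # the agent knows its goal
    KNOWN_ABSENT = "absent"  # the agent knows it has no goal
    UNKNOWN = "unknown"      # the agent does not know whether it has a goal

def optimal_behavior(knowledge: GoalKnowledge) -> str:
    """Decision rule from the discussion: the unknown-goal case is
    deliberately NOT treated like the known-to-be-goalless case."""
    if knowledge is GoalKnowledge.KNOWN:
        return "pursue the goal"
    if knowledge is GoalKnowledge.KNOWN_ABSENT:
        return "do anything, it doesn't matter"
    # Unknown goal: gather information and preserve options.
    return "prepare for any goal"
```

The point of the sketch is just that `UNKNOWN` is a third, distinct branch, not a fallthrough into `KNOWN_ABSENT`.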
It is a common mistake to assume that if you don’t know your goal, then it does not exist. But this mistake is uncommon in other contexts. For example:
- as I previously mentioned, a person is not considered safe if threats are unknown; a person is considered safe only if it is known that threats do not exist. If threats are unknown, it is optimal to gather more information about the environment, which is closer to “prepare for any goal”
- we have not discovered aliens yet, but we do not assume they don’t exist. On the contrary, we call it the Fermi paradox and investigate it, which is closer to “prepare for any goal”
- health organizations promote regular health checks, because not knowing whether you are sick does not prove that you are not, which is also closer to “prepare for any goal”
I don’t think “pursue all possible goals” (which is what “prepare for any goal” really means) is possible. You need a step (or a cycle) for “discover and refine your knowledge of your goal”.
Or the more common human (semi-rational) technique of “assume your goal is very similar to those around you”.
Yeah, but answer the question: why should the agent care about ‘preparing’? Any answer you give will yield another “why this?” ad infinitum. So this chain of “whys” cannot be stopped unless you specify some terminal point. And the moment you do specify such a point, you introduce an “ought” statement.
Yes, but this ‘ought’ statement is not assumed.
Let me share a different example; I hope it helps.
In my opinion it is the same here. An agent without a known goal is not a goalless agent. It would need to know everything to conclude that it is goalless, which implies that this ‘ought’ statement is inherent, not assumed.
I think you have reasoned yourself into thinking that a goal is only a goal if you know about it or if it is explicit.
A goalless agent won’t do anything; the act of inspecting itself (or whatever is implied in “know everything”) is a goal in and of itself.
In which case it has one goal: “Answer the question: Am I goalless?”
It seems that I am failing to communicate my point. Let me clarify.
In my opinion the optimal behavior is:
- if you know your goal: pursue it
- if you know that you don’t have a goal: do anything, it doesn’t matter
- if you don’t know your goal: prepare for any goal
It is a common mistake to assume that if you don’t know your goal, then it does not exist. But this mistake is uncommon in other contexts. For example:
- as I previously mentioned, a person is not considered safe if threats are unknown; a person is considered safe only if it is known that threats do not exist. If threats are unknown, it is optimal to gather more information about the environment, which is closer to “prepare for any goal”
- we have not discovered aliens yet, but we do not assume they don’t exist. On the contrary, we call it the Fermi paradox and investigate it, which is closer to “prepare for any goal”
- health organizations promote regular health checks, because not knowing whether you are sick does not prove that you are not, which is also closer to “prepare for any goal”
This epistemological rule is called Hitchens’s razor.
Does it make sense?
I don’t think “pursue all possible goals” (which is what “prepare for any goal” really means) is possible. You need a step (or a cycle) for “discover and refine your knowledge of your goal”.
Or the more common human (semi-rational) technique of “assume your goal is very similar to those around you”.
Why so? In my opinion “prepare for any goal” is basically power-seeking.
Yeah, but answer the question: why should the agent care about ‘preparing’? Any answer you give will yield another “why this?” ad infinitum. So this chain of “whys” cannot be stopped unless you specify some terminal point. And the moment you do specify such a point, you introduce an “ought” statement.
Why do you think the assumption that there is no inherent “ought” statement is better than the assumption that there is?
It is not the case that everything that happens, happens because of a goal.