I think we don’t just lack introspective access to our goals, but can’t be said to have goals at all (in the sense of a preference ordering over some well-defined ontology, attached to some decision theory that we’re actually running). The kind of pseudo-goals we do have (behavioral tendencies and semantically unclear values expressed in natural language) don’t seem to have the motivational strength to make us think “I should keep my goal G1 instead of avoiding arbitrariness”, nor is it clear what it would mean to “keep” such pseudo-goals as one self-improves.
What if it’s the case that evolution always or almost always produces agents like us, so the only way they can get real goals in the first place is via philosophy?
The primary point of my comment was to argue that an agent that has a goal in the strong sense would not abandon its goal as a result of philosophical consideration. Your response seems more directed at my afterthought about how our intuitions based on human experience would cause us to miss the primary point.
I think that we humans do have goals, despite not being able to consistently pursue them. I want myself and my fellow humans to continue our subjective experiences of life in enjoyable ways, without modifying what we enjoy. This includes connections to other people, novel experiences, high challenge, etc. There is, of course, much work to be done to complete this list and fully define all the high-level concepts, but in the end I think there are real goals there, which I would like to be embodied in a powerful agent that actually runs a coherent decision theory. Philosophy probably has to play some role in clarifying our “pseudo-goals” as actual goals, but so does looking at our “pseudo-goals”, however arbitrary they may be.
The primary point of my comment was to argue that an agent that has a goal in the strong sense would not abandon its goal as a result of philosophical consideration.
Such an agent would also not change its decision theory as a result of philosophical consideration, which potentially limits its power.
Philosophy probably has to play some role in clarifying our “pseudo-goals” as actual goals, but so does looking at our “pseudo-goals”, however arbitrary they may be.
I wouldn’t argue against this as written, but Stuart was claiming that convergence is “very unlikely” which I think is too strong.
Such an agent would also not change its decision theory as a result of philosophical consideration, which potentially limits its power.
I don’t think that follows, or at least the agent could change its decision theory as a result of some consideration, which may or may not be “philosophical”. We already have the example of a CDT agent that, learning in advance that it will face Newcomb’s problem, could predict it would do better if it switched to TDT.
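To put numbers on that prediction, here is a minimal sketch, assuming the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box iff one-boxing is predicted) and a hypothetical predictor accuracy p, of the expected values the agent can compute before the prediction is made:

```python
# Minimal sketch of the expected payoffs a CDT agent can compute in advance of
# Newcomb's problem, assuming the standard payoffs and a predictor of accuracy p.

def expected_payoff(one_box: bool, p: float) -> float:
    """Expected payoff when the predictor guesses the agent's actual choice with probability p."""
    if one_box:
        # The opaque box holds $1,000,000 iff one-boxing was (correctly) predicted.
        return p * 1_000_000
    # A two-boxer always gets the transparent $1,000, plus $1,000,000 only if
    # the predictor (incorrectly) predicted one-boxing.
    return 1_000 + (1 - p) * 1_000_000

p = 0.99
print(expected_payoff(one_box=True, p=p))   # ~990,000
print(expected_payoff(one_box=False, p=p))  # ~11,000
```

With the prediction still in the future, the agent’s disposition is causally upstream of it, so even a purely causal calculation favors becoming a one-boxer now.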
I wrote earlier
“ability to improve decision theory via philosophical reasoning” (as opposed to CDT-AI changing into XDT and then being stuck with that)
XDT (or in Eliezer’s words, “crippled and inelegant form of TDT”) is closer to TDT but still worse. For example, XDT would fail to acausally control/trade with other agents living before the time of its self-modification, or in other possible worlds.
Ah, yes, I agree that CDT would modify to XDT rather than TDT, though the fact that it self-modifies at all shows that goal-driven agents can change decision theories when the new decision theory helps them achieve their goals. I do think that it’s important to consider how a particular decision theory can decide to self-modify, and to design an agent with a decision theory that can self-modify in good ways.
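To illustrate the cutoff that makes the self-modified agent “crippled” (a toy framing, not a canonical formalization of XDT): from the CDT agent’s standpoint at the moment of self-modification, committing to one-box only pays where the commitment is causally upstream of the prediction, so predictions made (and agents living) before that moment still get the old two-boxing treatment, whereas TDT would one-box on both.

```python
# Toy sketch of the timing cutoff: the policy a CDT agent adopts when it
# self-modifies at time t_mod, evaluated causally from that moment.

def post_modification_choice(prediction_time: float, t_mod: float) -> str:
    """Choice the self-modified agent makes on a Newcomb-like problem whose
    prediction was (or will be) made at prediction_time."""
    if prediction_time >= t_mod:
        # The commitment is causally upstream of the prediction, so it pays.
        return "one-box"
    # The prediction was already fixed before the modification; committing
    # can't change it, so the causally dominant act is kept.
    return "two-box"

print(post_modification_choice(prediction_time=10.0, t_mod=5.0))  # one-box
print(post_modification_choice(prediction_time=1.0, t_mod=5.0))   # two-box (TDT would one-box)
```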
Not strictly. If a strongly goal’d agent determines that a different decision theory (or any change to itself) better maximizes its goal, it would adopt that new decision theory or change.
I agree that humans are not utility-maximizers or similar goal-oriented agents—not in the sense that we can’t be modeled as such things, but in the sense that these models do not compress our preferences to any great degree, because they are greatly at odds with our underlying mechanisms for determining preference and behavior.
Also, can we even get ‘real goals’ like this? We’re treading into territory where we risk proposing something as silly as blue unicorns on the back side of the moon. We use goals to model other human intelligences; that is built into our language, that’s how we imagine other agents, that’s how you predict a wolf, a cat, another ape, etc. Goals are really easy within imagination (which is not reductionist, and where the true paperclip count exists as a property of the ‘world’). Outside imagination, though...