The primary point of my comment was to argue that an agent that has a goal in the strong sense would not abandon its goal as a result of philosophical consideration. Your response seems more directed at my afterthought about how our intuitions based on human experience would cause us to miss the primary point.
I think that we humans do have goals, despite not being able to consistently pursue them. I want myself and my fellow humans to continue our subjective experiences of life in enjoyable ways, without modifying what we enjoy. This includes connections to other people, novel experiences, high challenge, etc. There is, of course, much work to be done to complete this list and fully define all the high-level concepts, but in the end I think there are real goals there, which I would like to see embodied in a powerful agent that actually runs a coherent decision theory. Philosophy probably has to play some role in clarifying our “pseudo-goals” as actual goals, but so does looking at our “pseudo-goals”, however arbitrary they may be.
The primary point of my comment was to argue that an agent that has a goal in the strong sense would not abandon its goal as a result of philosophical consideration.
Such an agent would also not change its decision theory as a result of philosophical consideration, which potentially limits its power.
Philosophy probably has to play some role in clarifying our “pseudo-goals” as actual goals, but so does looking at our “pseudo-goals”, however arbitrary they may be.
I wouldn’t argue against this as written, but Stuart was claiming that convergence is “very unlikely” which I think is too strong.
Such an agent would also not change its decision theory as a result of philosophical consideration, which potentially limits its power.
I don’t think that follows; at the very least, the agent could change its decision theory as a result of some consideration, which may or may not count as “philosophical”. We already have the example of a CDT agent that, learning in advance that it will face Newcomb’s problem, could predict it would do better if it switched to TDT.
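As a rough illustration (a minimal sketch, assuming the standard Newcomb payoffs of $1,000,000 and $1,000 and a predictor that accurately reads whatever policy the agent is running at scan time; the names and numbers are mine, not anything from this thread), the calculation such a CDT agent would make before the scan looks like this:

```python
# Why a CDT agent, told in advance it will face Newcomb's problem, prefers
# to self-modify (precommit to one-boxing) before the predictor scans it.
# Standard payoffs; the predictor is assumed accurate about the policy the
# agent runs at scan time.

BOX_A = 1_000        # transparent box: always contains $1,000
BOX_B = 1_000_000    # opaque box: filled only if the predictor expects one-boxing

def payoff(one_boxer_at_scan_time: bool, one_boxes_at_choice_time: bool) -> int:
    """Payoff given what the predictor saw and what the agent actually does."""
    box_b = BOX_B if one_boxer_at_scan_time else 0
    return box_b if one_boxes_at_choice_time else box_b + BOX_A

# Keep CDT: the agent will two-box at choice time, the predictor foresees
# this, and box B is left empty.
keep_cdt = payoff(one_boxer_at_scan_time=False, one_boxes_at_choice_time=False)   # 1,000

# Self-modify now to a one-boxing policy: the predictor sees a one-boxer
# and fills box B.
self_modify = payoff(one_boxer_at_scan_time=True, one_boxes_at_choice_time=True)  # 1,000,000

assert self_modify > keep_cdt  # by CDT's own lights, modifying now is the better bet
```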
I wrote earlier: “ability to improve decision theory via philosophical reasoning” (as opposed to CDT-AI changing into XDT and then being stuck with that)
XDT (or, in Eliezer’s words, a “crippled and inelegant form of TDT”) is closer to TDT but still worse. For example, XDT would fail to acausally control/trade with other agents living before the time of its self-modification, or in other possible worlds.
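To make that limitation concrete (a toy sketch of my own; the cutoff-time framing and the names are illustrative assumptions, not anything specified above): XDT behaves like TDT only toward predictions that depend on its post-modification source code, so any predictor whose model of the agent was fixed earlier still sees the old two-boxing CDT behaviour.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    scan_time: float       # when the predictor's model of the agent was fixed

MODIFICATION_TIME = 10.0   # hypothetical moment the CDT agent rewrote itself into XDT

def xdt_controls(pred: Prediction) -> bool:
    # XDT only "controls" predictions made after its self-modification;
    # earlier scans (or scans in other possible worlds) still reflect the
    # original CDT agent, so XDT gets none of the acausal benefit there.
    return pred.scan_time >= MODIFICATION_TIME

print(xdt_controls(Prediction(scan_time=12.0)))  # True: benefits from the change
print(xdt_controls(Prediction(scan_time=5.0)))   # False: stuck with the old behaviour
```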
Ah, yes, I agree that CDT would modify to XDT rather than TDT, though the fact that it self-modifies at all shows that goal-driven agents can change decision theories when the new decision theory helps them achieve their goals. I do think it’s important to consider how a particular decision theory can decide to self-modify, and to design an agent with a decision theory that can self-modify in good ways.
Not strictly. If a strongly goal-driven agent determines that a different decision theory (or any change to itself) better maximizes its goal, it would adopt that new decision theory or change.