The idea here is that, if the agent is computable, it can be simulated by any other computable system. So, if the map from its inputs and internal state to its motor output is computable, then we can build another computable system that produces the same map, since all universal computing systems can simulate one another by virtue of being Turing complete (and systems made of, e.g., partial recursive functions can simulate each other too, if they are given enough memory to do so).
I don’t see how this bears on the possibility of modelling every agent by a utility-maximising agent. Dewey’s construction doesn’t work. Its “simulation” of an agent by a utility-maximising agent just uses the agent to simulate itself and attaches the label “utility = 1” to its actions.
Dewey says pretty plainly: “any agents can be written in O-maximizer form”.
I know that he says that. I am saying, I thought pretty plainly, that I disagree with him.
He makes an O-maximiser from an agent, A. Once you have the corresponding O-maximiser, the agent A could be discarded.
He only does that in the earlier paper. His construction is as I described it: define O as doing whatever A does and label the result with utility 1. A is a part of O and cannot be discarded. He even calls this construction trivial himself, but underrates its triviality.
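For what it’s worth, the construction under dispute can be sketched in a few lines (a hedged illustration with my own hypothetical names, not Dewey’s notation; the very point being debated is whether the labelling and argmax steps add anything):

```python
# Sketch: O wraps agent A, assigns utility 1 to whatever A would do and 0 to
# every other action, then "maximises" over those labels. A remains a part of
# O and is consulted on every call.

def make_O_maximiser(A):
    """Wrap a computable agent A as a trivial utility maximiser."""
    def O(observation, actions):
        chosen = A(observation)                                    # run A to pick the action
        utilities = {a: 1 if a == chosen else 0 for a in actions}  # label A's choice with utility 1
        return max(actions, key=utilities.get)                     # argmax just recovers A's choice
    return O

# Example: an agent that always goes left behaves identically after wrapping.
A = lambda obs: "left"
O = make_O_maximiser(A)
print(O("any observation", ["left", "right"]))  # -> left
```

Whether you read the `max` step as making O a genuine utility maximiser, or as a vacuous relabelling of A, is exactly where the two sides of this thread part ways.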
I don’t really understand which problem you are raising. If O eventually contains a simulated copy of A—so what? O is still a utility-maximiser that behaves the same way that A does if placed in the same environment.
The idea of a utility maximiser as used here is that it assigns utilities to all its possible actions and then chooses the action with the highest utility. O does that—so it qualifies as a utility-maximiser.
O doesn’t assign utilities to its actions and then choose the best. It chooses its action (by simulating A), labels it with utility 1, and chooses to perform the action it just chose. The last two steps are irrelevant.
“Irrelevant”? If it didn’t perform those steps, it wouldn’t be a utility maximiser, and then the proof that you can build a utility maximiser which behaves like any computable agent wouldn’t go through. Those steps are an important part of the reason for exhibiting this construction in the first place.
O-maximisers are just plain old utility maximisers. Dewey rechristens them “Observation-Utility Maximizers” in his reworked paper.