I see. I think you are right, there is something wrong with Parfit’s Hitchhiker, when it’s understood in the way you did in the post, and UDT can’t handle this either.
My guess is that it’s usually understood differently, and I wasn’t following the way you understood it in the post. The desert and the town are usually taken to be separate, so that they can always be instantiated separately, no matter what the predictor in the desert or the agent in town decides. So it’s fine to have an agent in town with the memory of the predictor expecting them to not pay in town and not taking them there (and then for that agent to decide to pay).
It’s an impossible situation, similar to open-box Newcomb’s problem, but still a situation where the agent can be located, for purposes of finding dependencies to maximize over. These impossible situations need to be taken as “real enough” to find the agent in them. The dependency-based approach favored in UDT, TDT, and now Functional Decision Theory doesn’t help with clarifying this issue. For these theories, only the “living in impossible situations” philosophy that I alluded to in comments on your previous post helps with setting up the problems so that they can be understood in terms of dependencies. Your take on this was to deny impossible situations and replace them with observations, which is easier to describe, but more difficult to reason about in unexpected examples.
(See also this comment for another way of tackling this issue.)
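For concreteness, the policy-selection view being discussed can be sketched as a toy model. The perfect predictor and the specific payoffs (1000 for being rescued, a 100 cost for paying, 0 for dying in the desert) are my own illustrative assumptions, not part of the original problem statement:

```python
# Toy UDT-style policy selection for Parfit's Hitchhiker.
# Assumed payoffs (illustrative): rescued = 1000, paying costs 100,
# left in the desert = 0. The predictor is assumed perfect.

POLICIES = ["pay", "refuse"]

def predictor_takes_to_town(policy):
    # A perfect predictor rescues the hitchhiker iff the policy pays.
    return policy == "pay"

def utility(policy):
    if predictor_takes_to_town(policy):
        return 1000 - (100 if policy == "pay" else 0)
    return 0  # left in the desert

# UDT evaluates whole policies before any observation arrives, so the
# "impossible" situation (a refuser standing in town) never shows up
# as a live option in the expected-utility calculation.
best = max(POLICIES, key=utility)
print(best)  # → pay
```

The point of the sketch is that maximization happens over policies, not over situations, so the impossible situation only matters insofar as it is needed to locate the agent when defining the dependency.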
I see. I think you are right, there is something wrong with Parfit’s Hitchhiker, when it’s understood in the way you did in the post, and UDT can’t handle this either.
This statement confuses me. My argument is that UDT already does do this, but that it does so without explicit explanation or justification of what it is doing.
So it’s fine to have an agent in town with the memory of the predictor expecting them to not pay in town and not taking them there
Hmm… An agent that defects in any possible situation, for example, can figure out that the situation with this memory is impossible. So perhaps they’re using a paraconsistent logic. This would still operate on a representation of the system, rather than the system itself. But the problem with doing this is that it assumes the agent has the ability to represent paraconsistent situations. And without knowing much about paraconsistent logic, I would suspect that there are multiple possible approaches. How can we justify a specific one? It seems much easier to avoid all of this and work directly with the inputs, given that any real agent ultimately works on inputs. And even if we do adopt a paraconsistent logic, it seems like the justification for choosing that specific logic would ultimately be grounded in inputs.
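To illustrate what “work directly with the inputs” means here, a policy can simply be a map from observations to actions; the observation names below are my own hypothetical labels, not from the original discussion:

```python
# Sketch of the observation-based view: a policy is a function from
# inputs to actions. An "impossible" observation is just an input the
# policy is defined on but that never actually occurs.

def policy(observation):
    # Pay whenever we find ourselves in town, whatever memory we carry
    # about what the predictor expected.
    if observation in ("in_town", "in_town_predictor_expected_refusal"):
        return "pay"
    return "wait"  # still in the desert

# The second observation may be impossible given a perfect predictor,
# but the policy is still well-defined on it, so no paraconsistent
# representation of the situation is needed.
print(policy("in_town_predictor_expected_refusal"))  # → pay
```

The design choice this illustrates: the agent never has to reason about the impossible situation as a world, only about how to respond to an input, which sidesteps the question of which paraconsistent logic to adopt.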
Your take on this was to deny impossible situations and replace them with observations, which is easier to describe, but more difficult to reason about in unexpected examples.
How so? As I said, UDT already seems to do this.