I see. I think you are right, there is something wrong with Parfit’s Hitchhiker, when it’s understood in the way you did in the post, and UDT can’t handle this either.
This statement confuses me. My argument is that UDT already handles this case, but that it does so without any explicit explanation or justification of what it is doing.
So it’s fine to have an agent in town with the memory of the predictor expecting them to not pay in town and not taking them there.
Hmm… An agent that defects in every possible situation, for example, can figure out that the situation with this memory is impossible. So perhaps they’re using a paraconsistent logic. This would still work on a representation of a system, rather than on the system itself. But the problem with doing this is that it assumes the agent has the ability to represent paraconsistent situations. And, knowing little about paraconsistent logic, I would suspect that there are multiple possible approaches. How could we justify any specific one? It seems much easier to avoid all of this and work directly with the inputs, given that any real agent ultimately operates on inputs. Even if we did adopt a paraconsistent logic, the justification for choosing that particular logic would ultimately be grounded in inputs.
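For concreteness, here’s a rough sketch (in Python, with made-up payoff numbers and a perfect predictor assumed) of what I mean by working directly with the inputs. A UDT-style agent just scores each input→output map; the “impossible” situation of a non-payer standing in town never has to be represented at all:

```python
# Toy Parfit's Hitchhiker: the driver (a perfect predictor) rescues the
# agent only if the agent's policy would pay once in town.
# Payoff numbers are hypothetical, chosen only for illustration.

RESCUE_VALUE = 1_000_000   # value of being driven to town (surviving)
PAYMENT_COST = 100         # cost of paying the driver once in town

def outcome_utility(policy):
    """Score a policy (a map from observations to actions).

    Note that we only ever evaluate the policy on inputs; we never need
    to represent the contradictory situation of a non-payer in town.
    """
    # The perfect predictor simulates the policy on the observation "in town".
    predicted_action = policy("in town")
    if predicted_action == "pay":
        # Driver rescues; the agent later pays.
        return RESCUE_VALUE - PAYMENT_COST
    # Driver leaves the agent in the desert; the branch where a
    # non-payer stands in town simply never gets evaluated.
    return 0

policies = {
    "always pay": lambda obs: "pay",
    "never pay":  lambda obs: "refuse",
}

scores = {name: outcome_utility(p) for name, p in policies.items()}
print(scores)                                   # {'always pay': 999900, 'never pay': 0}
print(max(scores, key=scores.get))              # -> 'always pay'
```

Nothing paraconsistent is needed here: the contradictory branch never enters the evaluation, because the agent is choosing between input–output maps rather than reasoning from within the impossible situation.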
Your take on this was to deny impossible situations and replace them with observations, which is easier to describe, but more difficult to reason about in unexpected examples.
How so? As I said, UDT already seems to do this.