For example, B could be an agent that always defects and we could want to counterfactually calculate what B would do in town.
But then you are not considering a decision. My comments were under the assumption that there is a decision to make, not an impossible situation to construct.
UDT assumes that the agent has a Mathematical Intuition Function so the input is only real observations.
I don’t understand this statement (I’m thinking of UDT 1.1, i.e. the decision is a decision about a strategy, so there is no input to consider during decision making).
What did I say that made you think I might have believed this?
I was mistaken in thinking that you were discussing decision making by a particular agent, in which case this was a possible source of contradictions in descriptions of situations. Still not clear what motivates considering the contradictory situations, what kinds of situations are to be considered, and what this has to do with UDT.
My comments were under the assumption that there is a decision to make, not an impossible situation to construct.
Well, the question is what you should do in Parfit’s Hitchhiker with a perfect predictor. And before you can even talk about the predictor, you need to define what it predicts. Maybe it would have been clearer if I’d written, “B could be an agent that defects in any coherent situation and we want to construct a coherent counterfactual so that the predictor can predict it defecting”.
UDT assumes that the agent has a Mathematical Intuition Function so the input is only real observations.
I wrote this last sentence with UDT 1.0 in mind, which makes it confusing, as I referred to input-output maps, which are part of UDT 1.1. In UDT 1.0, even though you don’t perform Bayesian updates on inputs, they determine the observer set that is considered. Maybe it’d help to say that I think of UDT 1.1 as a modified version of UDT 1.0.
Still not clear what motivates considering the contradictory situations, what kinds of situations are to be considered, and what this has to do with UDT.
UDT is often argued to solve problems like Parfit’s Hitchhiker.
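To make that concrete, here is a minimal sketch of the usual policy-selection reading of UDT 1.1 on Parfit’s Hitchhiker with a perfect predictor. The policy names and payoff numbers are illustrative assumptions of mine, not anything from the problem statement; the point is only that the whole input-output map is scored before any observation arrives, and that the predictor’s behaviour depends on that same map.

```python
# A minimal sketch (not canonical UDT) of policy selection in the style of
# UDT 1.1, applied to Parfit's Hitchhiker with a perfect predictor.
# Payoff numbers and names are illustrative assumptions.

# A "policy" (input-output map) assigns an action to the only relevant
# observation, "in town".
policies = {
    "always_pay": lambda obs: "pay",
    "never_pay":  lambda obs: "refuse",
}

def utility(policy):
    # The perfect predictor runs the same policy on the hypothetical
    # "in town" observation to decide whether to give the ride at all.
    predicted = policy("in town")
    if predicted == "pay":
        return 1_000_000 - 1_000   # rescued, then pays in town (toy numbers)
    return 0                       # left in the desert

best = max(policies, key=lambda name: utility(policies[name]))
print(best)  # -> "always_pay": the whole map is chosen before any observation
```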
I see. I think you are right, there is something wrong with Parfit’s Hitchhiker, when it’s understood in the way you did in the post, and UDT can’t handle this either.
My guess is that usually it’s understood differently, and I wasn’t following the way you understood it in the post. The desert and the town are usually taken to be separate, so that they can always be instantiated separately, no matter what the predictor in the desert or the agent in town decides. So it’s fine to have an agent in town with the memory of the predictor expecting them to not pay in town and not taking them there (and then for that agent to decide to pay).
It’s an impossible situation similar to open-box Newcomb’s problem, but still a situation where the agent can be located, for purposes of finding dependencies to maximize over. These impossible situations need to be taken as “real enough” to find the agent in them. The dependency-based approach favored in UDT, TDT, and now Functional Decision Theory doesn’t help with clarifying this issue. For these, only the “living in impossible situations” philosophy that I alluded to in comments on your previous post helps with setting up the problems so that they can be understood in terms of dependencies. Your take on this was to deny impossible situations and replace them with observations, which is easier to describe, but more difficult to reason about in unexpected examples.
(See also this comment for another way of tackling this issue.)
I see. I think you are right, there is something wrong with Parfit’s Hitchhiker, when it’s understood in the way you did in the post, and UDT can’t handle this either.
This statement confuses me. My argument is that UDT already does do this, but that it does so without explicit explanation or justification of what it is doing.
So it’s fine to have an agent in town with the memory of the predictor expecting them to not pay in town and not taking them there
Hmm… An agent that defects in any possible situation, for example, can figure out that the situation with this memory is impossible. So perhaps they’re using a paraconsistent logic. This would still work on a representation of a system, rather than the system itself. But the problem with doing this is that it assumes that the agent has the ability to represent paraconsistent situations. And while I don’t know much about paraconsistent logic, I would suspect that there are multiple possible approaches. How can we justify a specific approach? It seems much easier to avoid all of this and work directly with the inputs, given that any real agent ultimately works on inputs. Or even if we do adopt a paraconsistent logic, it seems like the justification for choosing a specific logic would ultimately be grounded in inputs.
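To make the “work directly with the inputs” point concrete, here is a minimal sketch under my own assumptions (the observation strings and the policy are purely illustrative): the policy is defined over raw observation histories, so the agent never has to represent whether the world that would produce a given observation is logically consistent.

```python
# A minimal sketch of defining behaviour over raw inputs rather than over
# (possibly impossible) world-states. All observations here are illustrative.

def policy(observation_history):
    # "I remember the driver predicting I wouldn't pay, yet here I am in town"
    # may describe no consistent world, but as an input it still gets a
    # well-defined response.
    if "in town" in observation_history:
        return "pay"
    return "wait"

# The policy is total over inputs, including ones that a perfect predictor
# guarantees will never actually be fed to the agent.
print(policy(["woke in desert", "driver predicted refusal", "in town"]))  # -> "pay"
```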
Your take on this was to deny impossible situations and replace them with observations, which is easier to describe, but more difficult to reason about in unexpected examples.
How so? As I said, UDT already seems to do this.