I strongly disagree with this: diagonalization arguments often cannot be avoided at all, no matter how you change the setup. …
There is a trick that reliably gets you out of such paradoxes, however: switch to probabilistic mixtures.
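To make the trick concrete, here is a minimal sketch (a binary toy version; the “contrarian” agent and the grid search are my own illustration, not anything from the original setup). A deterministic prediction has no fixed point against an agent that does the opposite, but a probabilistic announcement does:

```python
def contrarian(p):
    # Probability the agent outputs 1, given that the oracle announces
    # probability p that it will output 1: the agent defies the oracle
    # wherever it can, and flips a fair coin when indifferent.
    if p < 0.5:
        return 1.0
    if p > 0.5:
        return 0.0
    return 0.5

# A calibrated announcement is a fixed point: contrarian(p) == p.
fixed_points = [i / 100 for i in range(101) if contrarian(i / 100) == i / 100]
print(fixed_points)  # [0.5]: the probabilistic mixture escapes the diagonalization
```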
Fair enough, the probabilistic mixtures thing was what I was thinking of as a change of setup, but reasonable to not consider it such.
“the message would consist of essentially uniformly random bits”
I don’t see how this is implied. If a fact is consistent across levels, and determined in a non-paradoxical way, can’t this become a natural fixed point that can be “transmitted” across levels? And isn’t this kind of knowledge all that is required for the malign prior argument to work?
The problem is that the act of leaving the message depends on the output of the oracle (otherwise you wouldn’t need the oracle at all, but then you also wouldn’t know how to leave the message). If the behavior of the machine depends on the oracle’s output, then we have to be careful about what the fixed point will be.
For example, if we try to fight the oracle and do the opposite, we get the “noise” situation from the grandfather paradox.
But if we try to cooperate with the oracle and do what it predicts, then there are many different fixed points and no telling which the oracle would choose (this is not specified in the setting).
It would be great to see a formal model of the situation. I think any model in which such message transmission would work is likely to require some heroic assumptions which don’t correspond much to real life.
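As a first step toward that, here is the kind of toy model I have in mind (a 4-action world; the three response rules are made up purely for illustration). The oracle announces an action, the machine responds, and a consistent prediction is a fixed point of the response rule:

```python
ACTIONS = range(4)  # tiny stand-in for an enormous action space

responses = {
    "ignore the oracle": lambda a: 2,            # machine acts the same regardless
    "defy the oracle":   lambda a: (a + 1) % 4,  # always do something else
    "obey the oracle":   lambda a: a,            # do whatever was predicted
}

for name, respond in responses.items():
    fixed_points = [a for a in ACTIONS if respond(a) == a]
    print(f"{name}: fixed points = {fixed_points}")

# ignore the oracle: fixed points = [2]          -> unique, informative prediction
# defy the oracle:   fixed points = []           -> the "noise" / grandfather-paradox case
# obey the oracle:   fixed points = [0, 1, 2, 3] -> underdetermined: which one is chosen?
```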
If the only transmissible message is essentially uniformly random bits, then of what value is the oracle?
I claim the message can contain lots of information. E.g. if there are 2^100 potential actions, but only 2 fixed points, then 99 bits have been transmitted (relative to uniform).
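Spelling out the arithmetic: narrowing a uniform distribution over $2^{100}$ actions down to a uniform distribution over $2$ fixed points conveys

$$\log_2 2^{100} - \log_2 2 = 100 - 1 = 99 \text{ bits}.$$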
The rock-paper-scissors example is relatively special, in that the oracle can’t narrow down the space of actions at all.
The UP (universal prior) situation looks to me to be more like the first situation than the second.
It would help to have a more formal model, but as far as I can tell the oracle can only narrow down its predictions of the future to the extent that those predictions are independent of the oracle’s output. That is to say, if the people in the universe ignore what the oracle says, then the oracle can give an informative prediction.
This would seem to exactly rule out any type of signal which depends on the oracle’s output, which is precisely the types of signals that nostalgebraist was concerned about.
That can’t be right in general. Normal Nash equilibria can narrow down predictions of actions, e.g. in a competition game (see the toy sketch below). This is despite each player’s decision depending on the other player’s action.
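To illustrate (a toy sketch; I’m substituting a 2×2 coordination game here, since it’s the smallest example where equilibrium narrows the prediction despite mutual dependence, and all the payoff numbers are made up):

```python
from itertools import product

def pure_nash(payoffs):
    # Enumerate pure-strategy Nash equilibria of a 2x2 two-player game.
    # payoffs[(i, j)] = (row player's payoff, column player's payoff).
    acts = (0, 1)
    equilibria = []
    for i, j in product(acts, acts):
        row_ok = all(payoffs[(i, j)][0] >= payoffs[(k, j)][0] for k in acts)
        col_ok = all(payoffs[(i, j)][1] >= payoffs[(i, k)][1] for k in acts)
        if row_ok and col_ok:
            equilibria.append((i, j))
    return equilibria

# Both players want to match: equilibrium narrows 4 profiles down to 2,
# even though each player's choice depends on the other's.
coordination = {(0, 0): (1, 1), (0, 1): (0, 0), (1, 0): (0, 0), (1, 1): (1, 1)}
# Matching pennies (the rock-paper-scissors-like case): no pure equilibrium at all.
pennies = {(0, 0): (1, -1), (0, 1): (-1, 1), (1, 0): (-1, 1), (1, 1): (1, -1)}

print(pure_nash(coordination))  # [(0, 0), (1, 1)]
print(pure_nash(pennies))       # []
```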
That’s fair, yeah
We need a proper mathematical model to study this further. I expect it to be difficult to set up because the situation is so unrealistic/impossible as to be hard to model. But if you do have a model in mind I’ll take a look