+1 and many thanks for wading into this with me… I’ve been working all day and I’m still at work so can’t necessarily respond in full…
I agree that these problems are a lot simpler if reducing my uncertainty about X cannot help me affect X. That’s not a minor class of problems, and I’d love to have better information for a lot of them. That said, many of the problems it seems most worthwhile for me to spend time and money reducing my uncertainty about are ones where I play a non-trivial role in how they turn out. Assuming I do have some causal power over X, I think I’d pay a lot more to know the “equilibrium” probability of X, the one that holds after I’ve digested the information the oracle gave me; anything else seems like stale information… but learning that equilibrium probability seems weird as well. If I’m surprised by what the oracle says, I imagine I’d ask myself questions like: how am I likely to react to this information… what must the probability have been before I learned it, for the current probability to be what it is… It feels like I’m losing freedom… to what extent is the experience of uncertainty tied to the experience of freedom?
The equilibrium probability might not be well defined. (E.g., if for whatever reason you form a sufficiently firm intention to falsify whatever the oracle tells you.)
And yes, if the oracle tells you something about your own future actions—which it has to, to give you an equilibrium probability—it’s unsurprising that you’re going to feel a loss of freedom. Either that, or disbelieve the oracle.
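To make the non-existence worry concrete, here’s a minimal sketch in Python (both response functions are made up for illustration). An equilibrium announcement is a fixed point of the listener’s response to the announcement; a firm intention to falsify makes that response discontinuous, and a discontinuous response needn’t have a fixed point, while a continuous one always does.

```python
def falsifier_response(announced_p):
    """Hypothetical listener with a firm intention to falsify the oracle:
    a high announced probability of X makes them prevent X, a low one
    makes them bring X about."""
    return 0.0 if announced_p >= 0.5 else 1.0

def deferential_response(announced_p):
    """Hypothetical listener who partly goes along with the announcement;
    continuous responses like this always have a fixed point."""
    return 0.25 + 0.5 * announced_p

for response in (falsifier_response, deferential_response):
    # An equilibrium announcement is a fixed point: response(p) == p.
    fixed = [i / 10000 for i in range(10001)
             if abs(response(i / 10000) - (i / 10000)) < 1e-9]
    print(response.__name__, "->", fixed or "no fixed point")
```

Running it prints no fixed point for the falsifier and [0.5] for the deferential listener.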
I guess it would still be well-defined as a fixpoint though, like in “Closed Timelike Curves Make Quantum and Classical Computing Equivalent”. Although by the same paper, it would be computationally infeasible for a predictor to actually find the fixpoint...
The point (or at least a point) is that there might not be a fixed point. I suppose what that would mean is that, in a universe containing such a predictor, you’re simply unable to form the sort of intentions that would produce the failure. That seems sufficiently far removed from the real world that it would probably be better to consider scenarios that don’t flirt so brazenly with paradox.
Yeah, you’re right.
The closed-timelike-curve paper weakens the guarantee of the predictor, so instead of saying “A with 25% probability, B with 75% probability”, it will stochastically say either “A” or “B”. So the kind of fixpoint it considers is less impressive.
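If I’m reading that weakening right, here’s a toy illustration (my own construction, not from the paper): hand the falsifier from the sketch above to the oracle. Neither deterministic announcement is self-consistent, but the uniform distribution over announcements is a fixed point of the induced stochastic map, which is the weaker, Aaronson-and-Watrous-style guarantee.

```python
import numpy as np

# Toy version (my construction, not from the paper): the listener does
# the opposite of whatever the oracle announces.  Column-stochastic map
# from announcement distribution to outcome distribution.
M = np.array([[0.0, 1.0],   # announce A -> outcome B, announce B -> outcome A
              [1.0, 0.0]])

# No deterministic announcement is consistent:
for i, name in enumerate("AB"):
    e = np.zeros(2)
    e[i] = 1.0
    print(f"announce {name}: outcome distribution {M @ e}")

# But the uniform distribution is a fixed point of the induced map: the
# oracle flips a coin and says "A" or "B", and the ensemble is consistent.
pi = np.array([0.5, 0.5])
print("M @ pi =", M @ pi)  # -> [0.5, 0.5]
```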
What does it mean for a probability not to be well defined in this context? I mean, I think I share the intuition, but I’m not really comfortable with it either. Doesn’t it seem strange that a probability could be well defined until I start learning more about it and trying to change it? How little do I have to care about the probability before it becomes well defined again?
As soon as the oracle is trying to make predictions that are affected by what the oracle says, the problem she has to solve shifts from “estimate the probabilities” to “choose what information to give, so as to produce consistent results given how that information will affect what happens”. In some cases there might not be anything she can say that yields consistent results. Exactly where (if at all) that becomes impossible depends on the details of the situation.
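As a sketch of that shifted problem, assuming the oracle knows the listener’s response function f: before speaking she has to solve f(p) = p. When f is continuous on [0, 1] a solution always exists (g(p) = f(p) - p runs from g(0) >= 0 down to g(1) <= 0, so the intermediate value theorem applies) and bisection finds it; when f jumps, as with the falsifier, there may be nothing consistent to say.

```python
def solve_equilibrium(response, tol=1e-9):
    """Bisection on g(p) = response(p) - p.  Assumes the oracle knows the
    listener's response function.  Since response maps into [0, 1],
    g(0) >= 0 and g(1) <= 0 always hold; if response is also continuous,
    the intermediate value theorem gives a root, i.e. a consistent
    announcement."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if response(mid) - mid >= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Continuous response: a consistent announcement exists and is found.
deferential = lambda p: 0.25 + 0.5 * p
p = solve_equilibrium(deferential)
print(p, abs(deferential(p) - p) < 1e-6)   # ~0.5, True

# Discontinuous falsifier: bisection converges to the jump at 0.5, but
# the limit point is not a fixed point -- nothing consistent to say.
falsifier = lambda p: 0.0 if p >= 0.5 else 1.0
p = solve_equilibrium(falsifier)
print(p, abs(falsifier(p) - p) < 1e-6)     # ~0.5, False
```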