Part of my confusion is that knowing the probability I will lose my job seems certain to affect the probability that I lose my job.
Yes, and this may make the question you ask the oracle ill-posed. But you can avoid this while still making the oracle about as useful: it will tell you the probability that you lose your job in the absence of a specific response to what the oracle tells you.
Alternatively, to reduce those feedback effects we could adjust the question to reduce your influence over the thing you’re being given information about. So, suppose you know that your job performance is good, and appreciated by your employer, and have no reason to think that’s likely to change, but your job is at risk for reasons that have nothing to do with your performance: you’re at a startup that might fail to find a good enough market, or a hedge fund that’s taking risks that might wipe it out, or you’re a political representative for a party that may be swept out of power on account of decisions taken by people other than you. If you knew everything relevant about the world you’d see that the probability of such a failure is either 0 or 50%, but in fact you have no idea, and the oracle will tell you which.
In either case we get back to something nearer to a pure value-of-information problem.
So, taking the “exogenous failure” version of the second approach, my answer to your question is something like this: If I lose my job with no warning, I guess it might take three months to find another comparably good job; if I have plenty of warning, I can line something up faster. I might pay the equivalent of ~ 1 month’s take-home pay for the information. But this is still an answer based on the possibility of making a bad prediction not come to pass after all. If all I get is some advance warning that I’m going to lose my job without warning (this is reminding me of the paradox of the unexpected hanging...) then it’s less useful; let’s say ~ 2 weeks’ pay. Note that these figures would all increase, perhaps by a lot, if my estimate of my re-employability were lower.
it appears I’m getting an offer to reduce the actual risk
Yes, I don’t think this is a VoI problem as posed. But again we can make it one by modifying it. You have an estimate: your earnings over the next 10 years will be normally distributed with mean M and standard deviation S. The oracle will, in exchange for your payment, give you a new value of M (about which you are currently quite uncertain) along with a new smaller value of S. Your present uncertainty about the new M corresponds to the reduction in S.
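To make that two-stage structure concrete, here is a small simulation sketch. All the numbers (the mean M, spread S, and the post-oracle spread) are invented for illustration; the point is that, by the law of total variance, our current uncertainty about the oracle’s new M plus the smaller residual spread recomposes exactly into the original Normal(M, S) belief.

```python
import random
import statistics

# Illustrative numbers only: 10-year earnings ~ Normal(M, S) a priori.
# The oracle reveals a new mean M' and the residual spread shrinks to S_new.
# Law of total variance: S^2 = Var(M') + S_new^2, so our present uncertainty
# about M' accounts exactly for the reduction from S to S_new.
M, S = 1_000_000, 300_000
S_new = 150_000
var_of_new_mean = S**2 - S_new**2  # uncertainty about the oracle's answer

random.seed(0)
samples = []
for _ in range(200_000):
    m_new = random.gauss(M, var_of_new_mean**0.5)  # a possible oracle answer
    samples.append(random.gauss(m_new, S_new))     # earnings given that answer

# The two-stage draw should recover the original Normal(M, S) prior.
print(round(statistics.mean(samples)))   # ~ 1_000_000
print(round(statistics.stdev(samples)))  # ~ 300_000
```

So "paying the oracle" here buys a reduction of the spread from S to S_new, and the spread of her possible answers is exactly the information you’re buying.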
Unfortunately you still have the problem from the first thought experiment, which I propose remedying in the same way: either the oracle gives you a prediction conditional on your acting as you would have without her help (so now if the income figure is depressingly low, that suggests you aren’t going to get the promotion you hoped for and you should consider looking for another job elsewhere (etc.) instead), or else you are for some reason unable to do anything to make bad predictions not come true.
Let me try to answer this question too, now it’s been made more answerable. Here’s a simplified version of the first of those options: before asking the oracle I predict income M − S or M + S with equal probability (std dev is S). The oracle gives me better probabilities so as to halve the standard deviation, which means 93.3% for one and 6.7% for the other. On the occasions when it gives me a “bad” prediction (it says M − S with probability 93.3%) I switch to plan B, which (optimistically) is about as good a priori as what I was previously intending to do, which means it restores the probabilities to 50%. So (my mean − M) has gone from zero to 1/2 (0.933 S − 0.067 S) + 1/2 · 0 = 0.433 S. In practice my plan B is probably worse a priori than my plan A, and I suspect other simplifications I’ve made have also made the oracle’s information more valuable, so the right figure is probably somewhat less than 0.433 S (note: S here is our “three years’ worth of income”). My gut feeling is that it’s quite a lot less, e.g. because when the oracle gives you bad news you don’t know which aspects of your current plans are responsible for it. The right answer might be more like 0.1 S, or ~ 4 months’ income.
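A quick check of the arithmetic above, with S normalized to 1 and the switch-to-plan-B assumption as stated:

```python
# The oracle's favoured-outcome probability: a two-point distribution at
# M +/- S with probabilities (p, 1 - p) has std dev 2 * sqrt(p * (1 - p)) * S,
# and this equals S/2 when p = (1 + sqrt(3/4)) / 2 ~= 0.933.
p = 0.9330127

std_after = 2 * (p * (1 - p)) ** 0.5
assert abs(std_after - 0.5) < 1e-3  # the oracle has halved the std dev

# Expected value of (mean - M), in units of S:
# half the time the news is good  -> keep plan A, gain (0.933 - 0.067) S;
# half the time the news is bad   -> switch to plan B, restoring 50/50, gain 0.
gain = 0.5 * (p - (1 - p)) + 0.5 * 0
print(round(gain, 3))  # 0.433
```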
In the second version (where I’m somehow prohibited from doing anything to fix the problem, if the oracle gives a low estimate of my future earnings), again the value of the information is obviously lower. I suppose it would be useful information for pension planning. As with the first question, I’m handwavily going to estimate that the benefit is half as much in this case, so 2 months’ income.
I should add that these figures for the second problem still feel rather high to me. If an oracle actually offered me that information, I am not at all sure I’d feel willing to pay even two months’ income for it.
+1 and many thanks for wading into this with me… I’ve been working all day and I’m still at work so can’t necessarily respond in full...
I agree that these problems are a lot simpler if reducing my uncertainty about X cannot help me affect X. This is not a minor class of problems, and I’d love to have better information for a lot of problems in this class. That said, many of the problems that it seems most worthwhile for me to spend my time and money reducing my uncertainty about are of the type where I have a non-trivial role in how they play out. Assuming I do have some causal power over X, I think I’d pay a lot more to know the “equilibrium” probability of X after I’ve digested the information the oracle gave me—anything else seems like stale information… but learning that equilibrium probability seems weird as well. If I’m surprised by what the oracle says, then I imagine I’d ask myself questions like: how am I likely to react to this information… what was the probability before I knew this information, such that the current probability is what it is… It feels like I’m losing freedom… to what extent is the experience of uncertainty tied to the experience of freedom?
The equilibrium probability might not be well defined. (E.g., if for whatever reason you form a sufficiently firm intention to falsify whatever the oracle tells you.)
And yes, if the oracle tells you something about your own future actions—which it has to, to give you an equilibrium probability—it’s unsurprising that you’re going to feel a loss of freedom. Either that, or disbelieve the oracle.
The equilibrium probability might not be well defined. (E.g., if for whatever reason you form a sufficiently firm intention to falsify whatever the oracle tells you.)

I guess it would still be well-defined as a fixpoint though, like in “Closed Timelike Curves Make Quantum and Classical Computing Equivalent”. Although by the same paper, it would be computationally infeasible for a predictor to actually find the fixpoint...
The point (or at least a point) is that there might not be a fixed point. I suppose what that might mean is that in a universe containing such a predictor you’re unable to form such intentions as would lead to the failure. This seems sufficiently far removed from the real world that it would probably be better to consider scenarios that don’t flirt so brazenly with paradox.

Yeah, you’re right.
The closed-timelike-curve paper weakens the guarantee of the predictor, so instead of saying “A with 25% probability, B with 75% probability”, it will stochastically say either “A” or “B”. So the kind of fixpoint it considers is less impressive.
What does it mean for a probability not to be well defined in this context? I mean, I think I share the intuition, but I’m not really comfortable with it either. Doesn’t it seem strange that a probability could be well defined until I start learning more about it and trying to change it? How little do I have to care about the probability before it becomes well defined again?
As soon as the oracle is trying to make predictions that are affected by what the oracle says, the problem she has to solve shifts from “estimate the probabilities” to “choose what information to give, so as to produce consistent results given how that information will affect what happens”. In some cases there might not be anything she can say that yields consistent results. Exactly where (if at all) that becomes impossible depends on the details of the situation.
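A toy sketch of that consistency problem (the reaction functions here are invented for illustration): the oracle announces a probability p, the listener’s response makes the true probability react(p), and a consistent announcement is a fixed point react(p) = p, which may or may not exist.

```python
def find_fixed_point(react, grid=10_000, tol=1e-3):
    """Scan [0, 1] for the p that comes closest to satisfying react(p) == p;
    return None if even the best candidate misses by more than tol."""
    best = min((i / grid for i in range(grid + 1)),
               key=lambda p: abs(react(p) - p))
    return best if abs(react(best) - best) < tol else None

# A damped reaction: hearing a higher probability makes you take precautions,
# partially lowering it. A consistent announcement exists.
damped = lambda p: 0.8 - 0.6 * p
print(find_fixed_point(damped))      # 0.5 (solves p = 0.8 - 0.6 p)

# A firm intention to falsify whatever the oracle says: no fixed point,
# so there is nothing consistent she can announce.
contrarian = lambda p: 0.0 if p > 0.5 else 1.0
print(find_fixed_point(contrarian))  # None
```

The contrarian case is exactly the “sufficiently firm intention to falsify” scenario above: the oracle’s problem stops being estimation and becomes an unsolvable consistency constraint.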