Consider the variant where the Oracle demands a fee of 100 utilons after delivering the prediction, which you can’t refuse. Then the winning strategy is going to be about ensuring that the current situation is counterfactual, so that in actuality you won’t have to pay the Oracle’s fee, because the Oracle wouldn’t be able to deliver a correct prediction.
The Oracle’s prediction only has to apply to the world that is. It doesn’t have to apply to worlds that are not.
The Oracle’s prediction only has to apply to the world where the prediction is delivered. It doesn’t have to apply to the other worlds. The world where the prediction is delivered can be the world that is not, and another world can be the world that is.
“The Oracle’s prediction only has to apply to the world where the prediction is delivered”—My point was that predictions that are delivered in the factual don’t apply to counterfactuals, but the way you’ve framed it is better as it handles a more general set of cases. It seems like we’re on the same page.
It’s not actually more general; it’s about a somewhat different point. The more general statement could use some notion of relative actuality, to point at the possibly counterfactual world determined by the decision made in the world where the prediction was delivered. That world is distinct from the even more counterfactual worlds where the prediction was delivered but the decision was different from what it would relative-actually be had the prediction been delivered, and from the worlds where the prediction was not delivered at all.
If the prediction is not actually delivered, then it only applies to that intermediately-counterfactual world, not to the more counterfactual alternatives where the prediction was still delivered, and not to the less counterfactual situation where the prediction is not delivered. Saying that the prediction applies to the world where it’s delivered is liable to be interpreted as including the more-counterfactual worlds, but it doesn’t have to apply there; it only applies to the relatively-actual world. So your original framing did the work of saying this carefully in a way that my framing didn’t, and replacing it with my framing discards that correct detail. The Oracle’s prediction only has to apply to the “relatively-actual” world where the prediction is delivered.
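To make the three kinds of worlds concrete, here is a minimal sketch (illustrative only; `World`, `classify`, and the example decision strings are made-up names, not anything from the discussion above) that tags each world by whether the prediction was delivered and whether the decision matches the relative-actual one:

```python
# Illustrative only: a toy tagging of worlds under the "relative actuality" framing.
# `World`, `classify`, and the example decision strings are hypothetical names.
from dataclasses import dataclass

@dataclass(frozen=True)
class World:
    prediction_delivered: bool
    decision: str  # the agent's decision in this world

def classify(world: World, relative_actual_decision: str) -> str:
    """Say whether the Oracle's prediction is required to hold in this world."""
    if not world.prediction_delivered:
        return "prediction not delivered: need not apply (may be the factual world)"
    if world.decision == relative_actual_decision:
        return "relatively-actual world: the prediction must hold here"
    return "more-counterfactual world: the prediction need not hold here"

# The decision the agent would relative-actually make, had the prediction been delivered.
relative_actual = "pay the fee"

for w in (
    World(prediction_delivered=False, decision="carry on"),
    World(prediction_delivered=True, decision="pay the fee"),
    World(prediction_delivered=True, decision="refuse"),
):
    print(classify(w, relative_actual))
```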
An Oracle’s prediction does not have to apply to worlds in which the Oracle does not ‘desire’ to retain its classification as an Oracle. Indeed, since an Oracle needs to take the effects of its predictions into account, one of the ways an Oracle might be implemented is that for each prediction it is considering making, it simulates a world where it makes that prediction to see whether it comes true. In which case there will be (simulated) worlds where a prediction is made within that world by (what appears to be) an Oracle, yet the prediction does not apply to the world where the prediction is delivered.
Or to put it another way, talk of “an Oracle” seems potentially confused, since the same entity may not be an Oracle in all the worlds under discussion.
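For concreteness, here is a minimal sketch of the simulation-based implementation described above, under the assumption of a hypothetical `simulate(prediction)` that returns what happens in a modeled world where that prediction is delivered; `oracle` and `simulate` are illustrative names, not an existing API:

```python
# Illustrative only: one way an Oracle might work, per the comment above.
# `oracle` and `simulate` are hypothetical names, not an existing API;
# `simulate(p)` stands in for a model of the world in which prediction p is delivered.
from typing import Callable, Iterable, Optional

def oracle(candidates: Iterable[str],
           simulate: Callable[[str], str]) -> Optional[str]:
    """Return a prediction that comes true in the simulated world where it is
    delivered (a fixed point), or None if no candidate passes the check."""
    for prediction in candidates:
        outcome = simulate(prediction)  # world in which this prediction is delivered
        if outcome == prediction:       # the prediction applies to that world
            return prediction           # safe to deliver: it will come true
        # Otherwise the simulated world contains a delivered prediction that does
        # not apply to the very world it was delivered in, the case noted above.
    return None

# Toy usage: an agent that does the opposite of whatever is predicted leaves the
# Oracle with no correct prediction to deliver, so it stays silent.
print(oracle(["A", "B"], simulate=lambda p: "B" if p == "A" else "A"))  # -> None
```

The silent case at the end is the situation exploited in the fee variant at the top of the thread: if no prediction would come true when delivered, the world where a prediction is delivered stays counterfactual and the fee is never actually paid.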