Predictions reify abstract ideas into actionable/observable judgements. A prediction of a hypothetical future lets you act depending on what happens in that future, thus making the probability or possibility of hypothetical future situations depend on their content. For the halting problem, where we have no notion of preference, this lets us deny the possibility of hypotheticals by directing the future away from the predictions made about them.
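As a minimal sketch of that last point (all names below are mine, nothing here is from the thread), the halting-problem case can be modeled as an agent that reads the predictor's reified judgement about its own behavior and then directs the future away from it, so no predictor can be correct about this agent:

```python
# Diagonalization sketch: the prediction is a concrete value in the
# agent's environment, and the agent realizes the other behavior.
# Behaviors are modeled as labels ("halt"/"loop") so the example
# terminates when run.

def contrarian_agent(predictor):
    predicted = predictor(contrarian_agent)  # the reified judgement about this future
    # Deny the predicted hypothetical by doing the opposite:
    return "loop" if predicted == "halt" else "halt"

def predicts_halt(agent):
    return "halt"  # one candidate predictor

def predicts_loop(agent):
    return "loop"  # the other candidate predictor

# Whichever prediction is made, the realized future differs from it:
assert contrarian_agent(predicts_halt) == "loop"
assert contrarian_agent(predicts_loop) == "halt"
```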
Concrete observable events that take place in a hypothetical future are, when thought about from the past, abstract ideas, since that future is not certain to ever take place. But similarly, a normative action in some situation is an abstract idea about that situation. So with the same device, we can build thought experiments where that abstract idea is made manifest as a concrete observable event, an oracle’s pronouncement. And then ask how to respond, when deciding that future, to the presence in the environment of this reification of one of the future’s properties.
> naive free will is a wrong model, and the decision is already made
If free will is about what an agent should do, but the prediction is about what an agent would do, there is no contradiction in these making different claims. If what an agent would do is by construction set to follow what it should do, these can’t be different. If these are still different, then it’s not the case that we arranged the action to be, by construction, the same as it should be.
Usually this tension can be resolved by introducing more possible situations: in some of the situations the action is still as it should be, and some of the situations take place in actuality, but perhaps none of the situations that take place in actuality also have the action agree with how it should be. Free will feels OK as an informal description of framings like this, referring to how actions should be.
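A toy model of that resolution (my own sketch, with made-up field names): the norm is satisfied in some possible situation, just not in any actual one, so “the agent could act as it should” and “the agent doesn’t actually do so” coexist without contradiction:

```python
# Each situation records whether it takes place in actuality and whether
# the action in it agrees with how it should be.
situations = [
    {"actual": True,  "action_as_it_should_be": False},
    {"actual": True,  "action_as_it_should_be": False},
    {"actual": False, "action_as_it_should_be": True},  # merely possible
]

# The normative action occurs in some possible situation...
assert any(s["action_as_it_should_be"] for s in situations)
# ...but in no situation that takes place in actuality.
assert not any(s["actual"] and s["action_as_it_should_be"] for s in situations)
```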
But what I’m talking about here is a setting where the normative action (the one that should be taken) doesn’t necessarily take place in any “possible” hypothetical version of the situation, yet is still announced in advance by an oracle as the normative action for that situation. That action might, for example, only be part of some “impossible” hypothetical versions of the situation, needed to talk about the normative correctness of the action (but not necessarily needed to talk about how the action would be taken in response to the oracle’s pronouncement).
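Here is a toy rendering of that setting (hypothetical names, my own sketch): the oracle’s pronouncement of the normative action is itself an observable event, and the agent’s actual response to it never coincides with the announced action, so the normative action shows up only in the “impossible” hypotheticals used to state its correctness:

```python
# The pronouncement is part of the environment; the agent's policy is a
# function of it. Actions are the Newcomb-style pair, for concreteness.
ACTIONS = ("one-box", "two-box")

def agent(announced_normative_action):
    # Actual behavior in response to the pronouncement: always the
    # action other than the one announced as normative.
    return next(a for a in ACTIONS if a != announced_normative_action)

for norm in ACTIONS:
    # In every possible world, the announced normative action is not taken;
    # it remains only in the counterfactuals that define what *should* happen.
    print(f"oracle announces {norm!r} as normative; agent takes {agent(norm)!r}")
```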
I’m not super-Kantian, but “ought implies can” seems pretty strong to me. If there is a correct prediction, the agent CANNOT invalidate it, and therefore talk of whether it should do so is meaningless (to me, at least; I am open to the idea that I really don’t understand how decisions interact with causality in the first place).
> still announced in advance by an oracle as the normative action for that situation.
I don’t think I’ve seen that in the setup of these thought experiments. So far as I’ve seen, Omega or the mugger acts conditionally on a prediction of action, not on a normative declaration about a counterfactual.