Just wanted to point out a basic error constantly made here: in a world with perfect predictors your decisions are an illusion, so the question “Should she wire $1,000 to the blackmailer?” is a category error. A meaningful question is “what kind of an agent is she?” She has no real choice in the matter, though she can discover what kind of an agent she is by observing what actions she ends up taking.
Absolutely agreed. This problem set-up is broken for exactly this reason.
There is no meaningful question about whether you “should” pay in this scenario, no matter what decision theory you are considering. You either will pay, or you are not in this scenario. This makes almost all of the considerations being discussed irrelevant.
This is why most of the set-ups for such problems talk about “almost perfect” predictors, but then you have to determine exactly how the actual decision is correlated with the prediction, because it makes a great deal of difference to many decision theories.
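To make that concrete, here is a toy Newcomb-style calculation (my own illustration, using the standard $1,000,000 / $1,000 payoffs rather than the blackmail numbers) that treats the predictor’s accuracy p as the assumed correlation between the prediction and the actual choice:

```python
# Toy illustration (not part of the original thread's scenario): how the
# predictor's assumed accuracy p changes the evidential expected value of
# each act in a Newcomb-style setup with the usual hypothetical payoffs.

def ev_one_box(p: float) -> float:
    # With probability p the predictor foresaw one-boxing and filled the big box.
    return p * 1_000_000

def ev_two_box(p: float) -> float:
    # With probability (1 - p) the predictor wrongly expected one-boxing,
    # so the big box is full anyway; the small box always adds $1,000.
    return (1 - p) * 1_000_000 + 1_000

for p in (0.5, 0.501, 0.6, 0.99, 1.0):
    print(f"p={p:<5}  one-box={ev_one_box(p):>12,.0f}  two-box={ev_two_box(p):>12,.0f}")
```

The evidential ranking of the two acts flips around p ≈ 0.5005, while a purely causal calculation, which ignores the correlation, recommends two-boxing at every p; that is the sense in which the exact strength of the correlation matters to different decision theories.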
Actually, this limit, from fairly decent to really good to almost perfect to perfect, is non-singular, because decisions are still part of the intentional stance, not anything fundamental. The question “what kind of an agent is she?” is still the correct one; it is just that the worlds are probabilistic (in the predictor’s mind, at least) rather than deterministic. There is the added complication of other-modeling between the agent and the predictor, but it can be accounted for without the concept of a decision, using only actions.
An unfortunate limitation of these framings is that predictors tend to predict how an agent would act, and not how an agent should act (in a particular sense). But both are abstract properties of the same situation, and both should be possible to predict.
These scenarios generally point out the paradox that there is a difference between what the agent would do (with a given viewpoint and decision model) and what it should do (with an outside view of the decision).
The whole point is that a perfect (or near-perfect) predictor IMPLIES that naive free will is a wrong model, and the decision is already made.
Predictions reify abstract ideas into actionable/observable judgements. A prediction of a hypothetical future lets you act depending on what happens in that future, thus making the probability or possibility of hypothetical future situations depend on their content. For the halting problem, where we have no notion of preference, this lets us deny the possibility of hypotheticals by directing the future away from the predictions made about them.
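A minimal sketch of that last move, under the assumption of a hypothetical oracle that announces the action it predicts this very agent will take; the agent then steers away from whatever is announced, so no announced hypothetical ever comes true:

```python
# Sketch (my own, with a made-up `oracle` interface): an agent that does the
# opposite of whatever the oracle predicts it will do, diagonalizing away
# any hypothetical the oracle tries to announce about it.

from typing import Callable

def contrarian_agent(oracle: Callable[[], str]) -> str:
    """Take whichever of the two actions the oracle did NOT predict."""
    predicted = oracle()  # the oracle's claim about this agent's own action
    return "refuse" if predicted == "pay" else "pay"

# Any purported perfect oracle for this agent is self-defeating:
for claim in ("pay", "refuse"):
    actual = contrarian_agent(lambda c=claim: c)
    print(f"oracle announces {claim!r:>8} -> agent does {actual!r}")
```

This is the same structure as the halting-problem diagonalization: the announced future is exactly the one the agent’s behaviour rules out.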
Concrete observable events that take place in a hypothetical future are seen as abstract ideas when thought about from the past, when that future is not certain ever to take place. But similarly, a normative action in some situation is an abstract idea about that situation. So with the same device, we can build thought experiments where that abstract idea is made manifest as a concrete observable event, an oracle’s pronouncement, and then ask how to respond to the presence of this reification of a property of a future in the environment, when deciding that future.
naive free will is a wrong model, and the decision is already made
If free will is about what an agent should do, but the prediction is about what an agent would do, there is no contradiction in their making different claims. If, by construction, what an agent would do is set to follow what an agent should do, these can’t be different. If they are still different, then it’s not the case that we arranged the action to be, by construction, the same as it should be.
Usually this tension can be resolved by introducing more possible situations: in some of them the action is still as it should be, and some of them take place in actuality, but perhaps none of the situations that take place in actuality also have the action agree with how it should be. Free will feels OK as an informal description of framings like this, referring to how actions should be.
But what I’m talking about here is a setting where the normative action (the one that should be taken) doesn’t necessarily take place in any “possible” hypothetical version of the situation, and yet it’s still announced in advance by an oracle as the normative action for that situation. That action might, for example, only be part of some “impossible” hypothetical versions of the situation, which are needed to talk about the normative correctness of the action (but not necessarily needed to talk about how the action would be taken in response to the oracle’s pronouncement).
I’m not super-Kantian, but “ought implies can” seems pretty strong to me. If there is a correct prediction, the agent CANNOT invalidate it, and therefore talk of whether it should do so is meaningless (to me, at least; I am open to the idea that I really don’t understand how decisions interact with causality in the first place).
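One way to write the inference I have in mind (my own formalization, reading ◇ as “can”, O as “ought”, and φ as “the agent invalidates the prediction”):

$$O(\varphi) \rightarrow \Diamond\varphi \quad \text{(ought implies can)}$$
$$\neg\Diamond\varphi \quad \text{(the prediction is stipulated to be correct, so invalidating it is impossible)}$$
$$\therefore\ \neg O(\varphi) \quad \text{(by modus tollens)}$$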
still announced in advance by an oracle as the normative action for that situation.
I don’t think I’ve seen that in the setup of these thought experiments. So far as I’ve seen, Omega or the mugger conditionally acts on a prediction of action, not on a normative declaration of counterfactual.