Let us assume a repeated game where an agent is presented with a decision between A and B, and Omega observes that the agent chooses A in 80% and B in 20% of the cases.
If Omega now predicts that the agent will choose A in the next instance of the game, then the probability of the prediction being correct is 80%: from Omega’s perspective as long as the agent’s decision hasn’t been made, and from the agent’s perspective as long as no decision has been made. However, once the decision has been made, the probability of the prediction being correct is, from the agent’s perspective, either 100% (A) or 0% (B).
If, instead, Omega is a ten-sided die with 8 A-sides and 2 B-sides, then the probability of the prediction being correct is 68% (0.8 · 0.8 + 0.2 · 0.2 = 0.68): from Omega’s perspective as long as the die hasn’t been rolled, and from the agent’s perspective as long as no decision has been made. However, once the decision has been made, the probability of the prediction being correct is, from the agent’s perspective, either 80% (A) or 20% (B).
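The arithmetic in the two cases above can be sketched as follows (a minimal illustration; the 80/20 frequencies for both the agent and the die are taken from the example):

```python
# Prediction-accuracy arithmetic for the two Omega variants above.
# Frequencies are the assumed 80% A / 20% B from the example.

p_agent = {"A": 0.8, "B": 0.2}  # agent's observed choice frequencies

# Case 1: Omega deterministically predicts A.
# Before the decision, P(prediction correct) = P(agent chooses A).
p_correct_fixed = p_agent["A"]  # 0.8

# Case 2: Omega is a ten-sided die with 8 A-sides and 2 B-sides.
p_die = {"A": 0.8, "B": 0.2}
# Before the decision: sum over outcomes of P(agent picks x) * P(die shows x).
p_correct_die = sum(p_agent[x] * p_die[x] for x in ("A", "B"))
# 0.8*0.8 + 0.2*0.2 = 0.68

# After the agent decides, the remaining uncertainty is only the die:
p_correct_given_A = p_die["A"]  # 0.8 if the agent chose A
p_correct_given_B = p_die["B"]  # 0.2 if the agent chose B

print(p_correct_fixed, p_correct_die, p_correct_given_A, p_correct_given_B)
```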
If the agent knows that Omega makes the prediction before the agent makes the decision, then the agent cannot make different decisions without affecting the probability of the prediction being correct, unless Omega’s prediction is a coin toss (p=0.5).
The only case where the probability of Omega being correct is unchangeable with p≠0.5 is the case where the agent cannot make different decisions, which I call “no free will”.
You are using the wrong sense of “can” in “cannot make different decisions”. The everyday subjective experience of “free will” isn’t caused by your decisions being indeterminate in an objective sense; that’s the incoherent concept of libertarian free will. Instead, it seems to be based on our decisions depending on some sort of internal preference calculation, and the correct sense of “can make different decisions” to use is something like “if the preference calculation had a different outcome, that would result in a different decision”.
Otherwise, results that are entirely random would feel more free than results that are based on your values, habits, likes, memories, and other character traits, i.e., the things that make you you. Not at all coincidentally, this is also the criterion for whether it makes sense to bother thinking about the decision.
You yourself don’t know the result of the preference calculation before you run it; otherwise it wouldn’t feel like a free decision. But whether Omega knows the result in advance has no impact on that at all.