My point is that the fact that Omega can guess 50⁄50 in the scenario I set up in the post doesn’t mean it actually has good predictive performance.
That… is precisely my point? See my conclusion:
If the fixed point was 50⁄50, however, the fixed-point condition is still satisfied by Omega putting in the money 50% of the time, but Omega is left with no predictive power over the outcome A.
Then I don’t understand what you’re trying to say here. Can you explain exactly what you think the problem with my setup is?
I think you’re just not understanding what I’m trying to say. My point is that if Omega actually has no knowledge, then predicting 50⁄50 doesn’t allow him to be right. On the other hand, if 50⁄50 is actually a fixed point of g, then Omega predicting 50⁄50 will give him substantial predictive power over outcomes. For example, it will predict that my actions will be roughly split 50⁄50, when there’s no reason for this to be true if Omega has no information about me; I could just always pick an action I’ve predetermined in advance whenever I see a 50⁄50 prediction.
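To make the contrast concrete, here is a minimal toy sketch (the two-action setup, the 0.01 threshold, and the agent names are assumptions of the sketch, not anything from the posts): Omega's announcement is reduced to a single probability p of action 1, and a 50⁄50 announcement is checked against an agent for whom 50⁄50 genuinely is a fixed point versus an agent who has predetermined a response to that announcement.
```python
# Toy sketch only: two actions {0, 1}, Omega's announcement reduced to
# p = P(action 1). The agent names and thresholds are illustrative assumptions.
import random

random.seed(0)

def coin_flip_agent(p):
    # Ignores the announcement and flips a fair coin, so 0.5 really is a
    # fixed point of this agent's response function g.
    return 1 if random.random() < 0.5 else 0

def predetermined_agent(p):
    # Whenever the announcement looks like 50/50, always play the
    # predetermined action 1; otherwise fall back to a coin flip.
    return 1 if abs(p - 0.5) < 0.01 else coin_flip_agent(p)

def empirical_rate(agent, p, n=10_000):
    # Fraction of trials on which the agent plays action 1, given announcement p.
    return sum(agent(p) for _ in range(n)) / n

print("coin-flip agent:     announced 0.5, observed", empirical_rate(coin_flip_agent, 0.5))      # ~0.5
print("predetermined agent: announced 0.5, observed", empirical_rate(predetermined_agent, 0.5))  # 1.0
```
The announced 50⁄50 matches the first agent's behaviour, but has no grip at all on the second.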
Alright, we are in agreement on this point.
I have a tendency to start on a tangential point, get agreement, and then show the implications for the main argument. In practice, a whole lot more people are open to changing their minds this way than through a direct argument. This may be somewhat less important on this forum than elsewhere.
You stated:
In contrast, I think almost all proponents of libertarian free will would agree that their position predicts that an agent with such free will, such as a human, could always just choose to not do as they are told. If the distribution they are given looks like it’s roughly uniform they can deterministically pick one action, and if it looks like it’s very far from uniform they can just make a choice uniformly at random. The crux is that the function g this defines can’t be continuous, so I believe this forces advocates of libertarian free will to the position that agents with free will must represent discontinuous input-output relations.
(Emphasis added.)
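A minimal sketch of the quoted strategy (the choice of two actions, the 0.25 threshold for "roughly uniform", and the particular predetermined action are assumptions of the sketch): the resulting g has no fixed point anywhere on [0, 1], and since every continuous map of [0, 1] into itself has one, this g cannot be continuous.
```python
# Toy version of the quoted strategy. Assumptions: two actions, the announcement
# is p = P(action 1), and "roughly uniform" means |p - 0.5| < 0.25.
def g(p):
    if abs(p - 0.5) < 0.25:   # announcement looks roughly uniform:
        return 0.0            # deterministically play action 0
    else:                     # announcement far from uniform:
        return 0.5            # choose uniformly at random

# Scan for fixed points g(p) = p. A continuous self-map of [0, 1] must have one
# (intermediate value theorem applied to g(p) - p), so an empty scan is evidence
# of the forced discontinuity.
grid = [i / 1000 for i in range(1001)]
print("smallest |g(p) - p| on the grid:", min(abs(g(p) - p) for p in grid))  # 0.25, bounded away from 0
```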
One corollary of your conclusion is that a continuous g implies a lack of free will: (A→B)→(¬B→¬A), or in this case (free_will→¬g_continuous)→(g_continuous→¬free_will).
However, I’ve shown a case where a continuous g nevertheless results in Omega having zero predictive power over the agent:
If the fixed point was 50⁄50, however, the fixed-point condition is still satisfied by Omega putting in the money 50% of the time, but Omega is left with no predictive power over the outcome A.
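One continuous g with that property, as a toy instance (again with two actions and the announcement reduced to p): the constant map g(p) = 0.5, i.e. an agent who ignores the announcement and flips a fair coin. It is continuous, 50⁄50 is its fixed point, and yet Omega cannot beat chance on any individual outcome.
```python
# Toy instance: g(p) = 0.5 for every p (agent ignores the announcement and
# flips a fair coin). Continuous, with 0.5 as its unique fixed point.
import random

random.seed(1)

n = 100_000
outcomes = [1 if random.random() < 0.5 else 0 for _ in range(n)]

# The announced distribution matches the aggregate behaviour...
print("fraction of action 1:", sum(outcomes) / n)  # ~0.5, as announced

# ...but on any single trial a fixed guess is right only about half the time.
guess = 1
print("per-trial accuracy of a fixed guess:", sum(o == guess for o in outcomes) / n)  # ~0.5
```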
This then either means that:
1. Your argument is flawed.
2. My tangential point is flawed.
3. This logic chain is flawed.
4. You think that Omega having no predictive power over the outcome A is still incompatible with the agent having free will. (If so, why? Note the phrasing: this is not stating that the agent must have free will, only that it's not ruled out.)
I don’t know why you’re talking about the Newcomb problem again. I’ve already said I don’t see how that’s relevant. Can you tell me how, in my setup, the fixed point being 50⁄50 means the oracle has no predictive power over the agent?
If 50⁄50 is a fixed point then the oracle clearly has predictive power, just like we have predictive power over what happens if you measure a qubit in the state (|0⟩+|1⟩)/√2. 50% doesn’t imply “lack of predictive power”.
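One way to make "predictive power at 50%" concrete (a toy example using a proper scoring rule; the fair coin stands in for repeated measurements of that qubit state): a calibrated 50⁄50 forecast scores strictly better under log loss than a miscalibrated confident one, so announcing 50% is not the same as having no information.
```python
# Toy example: score two forecasters with log loss on fair-coin data, which
# stands in for repeated measurements of the state (|0> + |1>)/sqrt(2).
import math
import random

random.seed(2)

outcomes = [1 if random.random() < 0.5 else 0 for _ in range(100_000)]

def avg_log_loss(p1, outcomes):
    # p1 is the forecast probability of outcome 1; lower loss is better.
    return -sum(math.log(p1 if o == 1 else 1.0 - p1) for o in outcomes) / len(outcomes)

print("calibrated 50/50 forecast:   ", avg_log_loss(0.5, outcomes))  # ~0.693 (= ln 2)
print("overconfident 90/10 forecast:", avg_log_loss(0.9, outcomes))  # ~1.20, strictly worse
```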