g being continuous does not appear to actually help resolve predictor problems, since a fixed point of 50%/50% left/right[1] is not excluded, and in that case Omega has no predictive power over the outcome A[2].
If you try to use this to resolve the Newcomb problem, for instance, you’ll find that an agent that simply flips a (quantum) coin to decide has no fixed point in f but does have one in g, as expected. That fixed point is 50%/50%, though, which means Omega is wrong exactly half the time. You could replace Omega with an inverted Omega or with a fair coin; they all have the same predictive power over the outcome A, i.e. none.
Is there, e.g., an additional quantum-circuit trick that can resolve this? Or am I missing something?
[1] Or one-box/two-box, etc.
[2] For the same reason that a fair coin has no predictive power over a (different) fair coin.
You’re overcomplicating the problem. If Omega predicts even odds on two choices and then you always pick one you’ve determined in advance, it will be obvious that Omega is failing to predict your behavior correctly.
Imagine that you claim to be able to predict the probabilities of whether I will choose left or right, and you predict 50% for both. If I just choose “left” every time, then obviously your predictions are bad: you’re “well calibrated” in the sense that 50% of your 50% predictions come true, but the model that both of my choices have equal probability will simply be rejected outright.
In contrast, if this is actually a fixed point of g, I will choose left about half the time and right about half the time. There’s a big difference between those two cases, and I can specify an explicit hypothesis test with p-values, etc. if you want, though it shouldn’t be difficult to come up with your own.
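A minimal sketch of the kind of hypothesis test meant here (an illustration, not anything specified in the thread): an exact two-sided binomial test of the null hypothesis that both choices are equally likely. An always-“left” agent makes the 50%/50% model untenable after a modest number of trials, while a genuinely 50%/50% agent does not.

```python
from math import comb

def binomial_p_value(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: the probability, under the null hypothesis
    that each trial comes up 'left' with probability p, of any outcome at least
    as unlikely as observing k 'left' choices out of n."""
    pmf = [comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(n + 1)]
    return sum(q for q in pmf if q <= pmf[k] * (1 + 1e-12))

n = 100
print(binomial_p_value(100, n))  # always-left agent: ~1.6e-30, the 50/50 model is rejected
print(binomial_p_value(53, n))   # roughly 50/50 agent: ~0.62, consistent with the fixed point
```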
If Omega predicts even odds on two choices and then you always pick one you’ve determined in advance
(I can’t quite tell if this was intended to be an example of a different agent or if it was a misconstrual of the agent example I gave. I suspect the former but am not certain. If the former, ignore this.) To be clear: I meant an agent that flips a quantum coin to decide at the time of the choice. This is not determined, or determinable, in advance[1]. Omega can predict g here fairly easily, but not f.
There’s a big difference between those two cases
Absolutely. It’s the difference between ‘average’ predictive power over all agents and worst-case predictive power over a single agent, give or take, assuming I am understanding you correctly.
If Omega predicts even odds on two choices and then you always pick one you’ve determined in advance, it will be obvious that Omega is failing to predict your behavior correctly.
Ah. I think we are thinking of two different variants of the Newcomb problem. I would be interested in a more explicit definition of your Newcomb variant. The original Newcomb problem does not allow the box to contain odds, just a binary money/no money.
I agree that if Omega gives the agent the odds explicitly it’s fairly simple for an agent to contradict Omega.
I was assuming that Omega would instead treat the fixed point as a mixed strategy (in the game-theory sense): if Omega predicted that, say, 70⁄30 was a fixed point, Omega would roll a d10[2] and put the money in 70% of the time.
This “works”, in the sense of satisfying the fixed point, and in this case it leaves Omega with nowhere near perfect predictive power over the agent, but some predictive power at least (58% correct, if I calculated it correctly: 0.7² + 0.3² = 0.58).
If the fixed point were 50⁄50, however, the fixed point is still satisfied by Omega putting the money in 50% of the time, but Omega is left with no predictive power over the outcome A.
[1] To the best of our knowledge, anyway.
[2] Read ‘private random oracle’.
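A quick numerical check of the 58% figure (a sketch under the assumption that Omega samples its prediction independently from the same fixed-point distribution the agent draws its action from): the probability that prediction and action agree is p² + (1 − p)², which gives 0.58 at 70⁄30 and exactly 0.5 at 50⁄50.

```python
import random

def match_rate(p: float, trials: int = 200_000) -> float:
    """Empirical probability that Omega's sampled prediction and the agent's
    sampled action agree, when both are independent Bernoulli(p) draws."""
    agree = sum(
        (random.random() < p) == (random.random() < p)  # prediction vs. action
        for _ in range(trials)
    )
    return agree / trials

print(match_rate(0.7))  # ~0.58 = 0.7**2 + 0.3**2: some predictive power
print(match_rate(0.5))  # ~0.50: no better than a fair coin at predicting the outcome A
```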
It’s intended to be an example of a different agent. I don’t care much about the Newcomb problem setting since I think it’s not relevant in this context.
My point is that the fact that Omega can guess 50⁄50 in the scenario I set up in the post doesn’t allow it to actually have good performance and it’s easy to tell this by running any standard hypothesis test. So I don’t get how your Newcomb setup relates to my proposed setup in the post.
My point is that the fact that Omega can guess 50⁄50 in the scenario I set up in the post doesn’t allow it to actually have good performance
That… is precisely my point? See my conclusion:
If the fixed point were 50⁄50, however, the fixed point is still satisfied by Omega putting the money in 50% of the time, but Omega is left with no predictive power over the outcome A.
Then I don’t understand what you’re trying to say here. Can you explain exactly what you think the problem with my setup is?
I think you’re just not understanding what I’m trying to say. My point is that if Omega actually has no knowledge, then predicting 50⁄50 doesn’t allow him to be right. On the other hand, if 50⁄50 is actually a fixed point of g, then Omega predicting 50⁄50 will give him substantial predictive power over outcomes. For example, it will predict that my actions will be roughly split 50⁄50, when there’s no reason for this to be true if Omega has no information about me; I could just always pick an action I’ve predetermined in advance whenever I see a 50⁄50 prediction.
Alright, we are in agreement on this point.
I have a tendency to start on a tangential point, get agreement, then show the implications for the main argument. In practice a whole lot more people are open to changing their minds in this way than directly. This may be somewhat less important on this forum than elsewhere.
You stated:
In contrast, I think almost all proponents of libertarian free will would agree that their position predicts that an agent with such free will, such as a human, could always just choose to not do as they are told. If the distribution they are given looks like it’s roughly uniform they can deterministically pick one action, and if it looks like it’s very far from uniform they can just make a choice uniformly at random. The crux is that the function g this defines can’t be continuous, so I believe this forces advocates of libertarian free will to the position that agents with free will must represent discontinuous input-output relations.
(Emphasis added.)
One corollary of your conclusion is that a continuous g implies a lack of free will: (A→B)→(¬B→¬A), or in this case (free_will→¬g_continuous)→(g_continuous→¬free_will).
However, I’ve shown a case where a continuous g nevertheless results in Omega having zero predictive power over the agent:
If the fixed point were 50⁄50, however, the fixed point is still satisfied by Omega putting the money in 50% of the time, but Omega is left with no predictive power over the outcome A.
This then means that at least one of the following is true:
Your argument is flawed.
My tangential point is flawed.
This logic chain is flawed.
You think that Omega having no predictive power over the outcome A is still incompatible with the agent having free will.
(If so, why?)
(Note phrasing: this is not stating that the agent must have free will, only that it’s not ruled out.)
I don’t know why you’re talking about the Newcomb problem again. I’ve already said I don’t see how that’s relevant. Can you tell me how, in my setup, the fixed point being 50⁄50 means the oracle has no predictive power over the agent?
If 50⁄50 is a fixed point, then the oracle clearly has predictive power over the agent, just as we have predictive power over what happens when we measure a qubit in the state (|0⟩+|1⟩)/√2. 50% doesn’t imply “lack of predictive power”.
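To spell out the qubit analogy with a toy sketch (an illustration, not something from the thread): the 50⁄50 prediction for measuring (|0⟩+|1⟩)/√2 is a substantive, testable claim about the outcome distribution, and repeated measurements bear it out.

```python
import numpy as np

# |psi> = (|0> + |1>) / sqrt(2)
psi = np.array([1.0, 1.0]) / np.sqrt(2)

# Born rule: outcome probabilities are the squared amplitudes -> [0.5, 0.5]
born_probs = np.abs(psi) ** 2

# Simulate repeated measurements and compare empirical frequencies to the prediction.
rng = np.random.default_rng(seed=0)
outcomes = rng.choice([0, 1], size=100_000, p=born_probs)
print(born_probs)                              # [0.5 0.5]   (the prediction)
print(np.bincount(outcomes) / outcomes.size)   # ~[0.5 0.5]  (the empirical frequencies)
```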