For the curious, you should be indifferent between one-boxing and two-boxing when Omega predicts your response correctly 50.05% of the time. If Omega is perceptibly better than chance, one-boxing is still the way to go.
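A quick sanity check of that 50.05% figure, as a sketch assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one, matching the $1,001,000 mentioned downthread):

```python
# Sanity-check of the indifference point, assuming the standard Newcomb
# payoffs: $1,000,000 in the opaque box (iff Omega predicted one-boxing)
# and $1,000 in the transparent box.
BIG, SMALL = 1_000_000, 1_000

def ev_one_box(p):
    # With probability p, Omega correctly predicted one-boxing.
    return p * BIG

def ev_two_box(p):
    # With probability p, Omega correctly predicted two-boxing,
    # so the opaque box is empty.
    return (1 - p) * BIG + SMALL

# Setting the two expected values equal: p*BIG = (1-p)*BIG + SMALL,
# so p = (BIG + SMALL) / (2 * BIG).
p_star = (BIG + SMALL) / (2 * BIG)
print(p_star)              # 0.5005
print(ev_one_box(p_star))  # 500500.0
print(ev_two_box(p_star))  # 500500.0
```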
Now I wonder how good humans are at playing Omega.
Better than 50.05% accuracy actually doesn’t sound that implausible, but I will note that if Omega is probabilistic, then the way in which it is probabilistic affects the answer. E.g., if Omega works by asking people what they will do and then believing them, it may well get better-than-chance results with humans, at least some of whom are honest. However, the correct response in this version of the problem is to two-box and lie.
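A minimal sketch of that “ask and believe” variant, again assuming the standard payoffs: the opaque box’s contents now depend only on what you say, not on what you do, so two-boxing dominates and the lie collects both payoffs.

```python
# In the "Omega asks and believes you" variant, the opaque box is filled
# based on your *stated* intention, so statement and action can differ.
BIG, SMALL = 1_000_000, 1_000

def payoff(statement, action):
    opaque = BIG if statement == "one-box" else 0       # filled iff you *said* one-box
    transparent = SMALL if action == "two-box" else 0   # collected only by two-boxing
    return opaque + transparent

for statement in ("one-box", "two-box"):
    for action in ("one-box", "two-box"):
        print(f"say {statement}, do {action}: ${payoff(statement, action):,}")

# say one-box, do one-box: $1,000,000
# say one-box, do two-box: $1,001,000   <- lying and two-boxing dominates
# say two-box, do one-box: $0
# say two-box, do two-box: $1,000
```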
Better than 50.05% accuracy actually doesn’t sound that implausible, but I will note that if Omega is probabilistic, then the way in which it is probabilistic affects the answer.
Sure, I was reading the 50.05% in terms of probability, not frequency, though I stated it the other way. If you have information about where his predictions come from, that will change your probability that his prediction is correct.
Fair point, you’re right.
… and if your utility scales linearly with money up to $1,001,000, right?
Yes, that sort of thing was addressed in the parenthetical in the grandparent. It doesn’t specifically have to scale linearly.
Or if the payoffs are reduced to fall within the (approximately) linear region.
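To illustrate the linearity point, here is a hedged sketch that recomputes the indifference accuracy under a concave utility function; the choice of sqrt is purely a hypothetical example, nothing in the thread commits to it.

```python
# Recompute the indifference accuracy for an arbitrary utility function u.
# sqrt is a hypothetical stand-in for "diminishing marginal utility".
from math import sqrt

BIG, SMALL = 1_000_000, 1_000

def indifference_p(u):
    # Solve p*u(BIG) + (1-p)*u(0) == p*u(SMALL) + (1-p)*u(BIG + SMALL) for p.
    gain = u(BIG + SMALL) - u(0)   # two-boxing's win when Omega is wrong
    loss = u(BIG) - u(SMALL)       # one-boxing's win when Omega is right
    return gain / (gain + loss)

print(indifference_p(lambda x: x))  # 0.5005  (linear utility)
print(indifference_p(sqrt))         # ~0.5082 (concave: Omega must be more accurate)
```

With diminishing marginal utility the guaranteed $1,000 is worth relatively more against the million, so the accuracy Omega needs before one-boxing pays creeps upward.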
But if they are too low (say, $1.00 and $0.01), I might do things other than what gets me more money, Just For The Hell Of It.
And thus was the first zero-boxer born.
Zero-boxer: “Fuck you, Omega. I won’t be your puppet!”
Omega: “Keikaku doori...” (“just according to plan”)