To the extent that Newcomb’s Problem is ‘about how you view free will’, people who two-box on Newcomb’s Problem are confused about free will.
If you say so. If I learn enough about “choshi dori” to fool the punch-avoiding algorithm and win 1000 dollars, and you don’t play, who is confused? Rationalists are supposed to win, remember, not stick to a particular view of a problem.
Rational agents who play Newcomb’s Problem one-box. Rational agents who are in entirely different circumstances make entirely different decisions, as determined by said circumstances. They also tend to have a rudimentary capability of noticing the difference between problems.
(a) You are being a dick. I certainly did not insult anyone in this thread.
(b) The isomorphism is exact. The point is granularity. If the guy can avoid the punch 90% of the time (or, more precisely, guess what your punch-decision algorithm will do in response to some inputs 90% of the time), and Omega guesses what you will do correctly 90% of the time, that ought to be sufficient to do the math on expected values, if you want to leave it there.
Or, alternatively, you can try to “open up the agent you are playing against” and try to trick it. It’s certainly possible in the punching game. It may or may not be possible in the game with Omega—the problem doesn’t specify.
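For what it’s worth, the expected-value math invited in (b) is easy to write down. A minimal sketch in Python, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and the 90% accuracy figure above:

```python
# Expected payoff of one-boxing vs. two-boxing against a predictor
# that guesses your decision correctly with probability p.
BIG, SMALL = 1_000_000, 1_000  # standard Newcomb payoffs

def ev_one_box(p):
    # Predictor correct (prob. p): the opaque box was filled.
    return p * BIG

def ev_two_box(p):
    # Predictor correct (prob. p): the opaque box is empty; you keep $1,000.
    # Predictor wrong (prob. 1 - p): you walk away with both boxes.
    return p * SMALL + (1 - p) * (BIG + SMALL)

p = 0.9
print(ev_one_box(p), ev_two_box(p))  # 900000.0 101000.0
```

At 90% accuracy the comparison is lopsided in favor of one-boxing, so ‘leaving it there’ settles the expected-value branch of the argument quickly.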
If you say “well, rational people do X and not Y, end of story” that’s fine. I am going to make my updates on you and move on.
A typical example of irrational behavior is intransitive preference. As the money-pump thread shows, people often don’t actually fall for money pumping, even if they have intransitive preferences. In other words, the map doesn’t fully reflect the territory of what people actually do.
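For readers who haven’t met the term: a money pump exploits an intransitive preference cycle by charging a small fee for each trade up. A minimal sketch, where the goods A, B, C, the cycle, and the $1 fee are all illustrative:

```python
# An agent holds C, prefers B to C, A to B, and C to A, and will pay
# a small fee each time it trades up to something it prefers.
# One lap around the cycle leaves it holding C again, strictly poorer.
prefers = {("B", "C"), ("A", "B"), ("C", "A")}  # (x, y): x is preferred to y
fee, wealth, holding = 1, 100, "C"

for offer in ["B", "A", "C"]:        # one trip around the cycle
    if (offer, holding) in prefers:  # agent trades up and pays the fee
        holding, wealth = offer, wealth - fee

print(holding, wealth)  # C 97 -- same good, three dollars poorer
```

The point above is precisely that real people, unlike this toy agent, tend to balk somewhere along the loop.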
Another example is gwern’s point about correlation and causation. Correlation does not imply causation, says gwern, but if we knew how often it does imply it, we may well be rational to conclude the latter from the former when the odds are good enough. He’s right—but no one actually does this (I don’t think!).
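gwern’s observation can be phrased as a simple base-rate bet. A minimal sketch; the base rate and payoffs below are made-up illustrative numbers, not estimates:

```python
# If some known fraction of observed correlations turn out to be causal,
# "act as if correlation implies causation" becomes a bet you can price.
base_rate = 0.25        # hypothetical: 1 in 4 such correlations is causal
gain_if_causal = 100    # payoff from acting when the link is real
loss_if_spurious = 20   # cost of acting when it is not

ev_act = base_rate * gain_if_causal - (1 - base_rate) * loss_if_spurious
print(ev_act)  # 10.0 -- positive, so acting on the correlation pays here
```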
I used the example of the punching game on purpose—it makes the theoretical situation with Omega practical: you can go and try this game if you want. My response to trying the game was to learn how it works, rather than give up playing it. This is what people actually do. If your model doesn’t capture that, it’s not a good model.
A broader comment: I do math for a living. The issues of applying math to practical problems, and of changing math models around, are things I think about quite a bit.
It took a non-trivial exertion in the direction of politeness to refrain from answering the rhetorical question “who is confused?” with a literal answer.
I certainly did not insult anyone in this thread.
Arguable. I would concede at least that you did not say anything insulting that you do not sincerely believe is warranted.
(b) The isomorphism is exact. The point is granularity. If the guy can avoid the punch 90% of the time (or, more precisely, guess what your punch-decision algorithm will do in response to some inputs 90% of the time), and Omega guesses what you will do correctly 90% of the time, that ought to be sufficient to do the math on expected values, if you want to leave it there.
Doing expected-value calculations on probabilistic variants of Newcomb’s Problem is also old news, and results in one-boxing unless the accuracy gets quite close to random guessing. Once again, if you choose a problem sufficiently different from Newcomb’s (such as by choosing an accuracy sufficiently close to 0.5, by reducing the payoff ratio, or by positing that you are in fact more intelligent than Omega), then you have failed to respond to a relevant question (or an interesting question, for that matter).
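The threshold behind ‘quite close to random guessing’ falls out of the same expected-value algebra. A short sketch with the standard payoffs:

```python
# One-boxing beats two-boxing when
#   p * BIG > p * SMALL + (1 - p) * (BIG + SMALL),
# which solves to p > (BIG + SMALL) / (2 * BIG).
BIG, SMALL = 1_000_000, 1_000
p_breakeven = (BIG + SMALL) / (2 * BIG)
print(p_breakeven)  # 0.5005 -- barely better than a coin flip
```

With the million-to-thousand payoff ratio, any predictor noticeably better than chance makes one-boxing the higher-value play; shrinking that ratio pushes the threshold up, which is the ‘reducing the payoff ratio’ escape hatch just mentioned.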
If you say “well, rational people do X and not Y, end of story” that’s fine. I am going to make my updates on you and move on.
Please do. I have likewise updated. Evidence suggests you are ill suited to considering counterfactual problems and unlikely to learn. My only recourse here is to minimize the damage you can do to the local sanity waterline. I’ll leave further attempts at verbal interaction to the half a dozen others who have been attempting to educate you, assuming they have more patience than I.
A broader comment: I do math for a living. The issues of applying math to practical problems, and of changing math models around, are things I think about quite a bit.
See.