In pragmatic terms, we know that true two-boxers will willingly take on arbitrarily large disutility.
This is only the case in a world-view that accepts that Omega cannot be tricked. How do you know Omega cannot be tricked? This view corresponds to a particular picture of how choices get made, how the choice-making algorithm is simulated, and various properties of that simulation as embodied in physical reality. Absent an actual proof, this view is just that: a view.
Two-boxers aren’t (necessarily!) stupid; they simply adhere to commitments that make it possible to fool Omega.
Two-boxers aren’t (necessarily!) stupid; they simply adhere to commitments that make it possible to fool Omega.
No, they don’t. You seem to be confused not just about Newcomb’s Problem but also about why the (somewhat educated subset of) people who two-box make that choice. They emphatically do not do it because they believe they are able to fool Omega. They expect to lose (i.e., not get the $1,000,000).
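To make that concrete, here is the dominance argument those two-boxers actually run, as a minimal sketch (the $1,000 / $1,000,000 payoff figures are the standard ones from the usual statement of the problem):

```python
# Standard Newcomb payoffs: box A always holds $1,000; box B holds
# $1,000,000 iff Omega predicted you would take only box B.
PAYOFF = {
    # (your_choice, omega_prediction): dollars you walk away with
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 0,
    ("two-box", "one-box"): 1_001_000,
    ("two-box", "two-box"): 1_000,
}

# Causal dominance: holding Omega's (already fixed) prediction constant,
# two-boxing yields exactly $1,000 more in either case.
for prediction in ("one-box", "two-box"):
    assert PAYOFF[("two-box", prediction)] == PAYOFF[("one-box", prediction)] + 1_000
```

The dominance holds row by row: whatever Omega already predicted, two-boxing nets $1,000 more. That is why a causal decision theorist two-boxes while fully expecting box B to be empty.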
This is only the case in a world-view that accepts that Omega cannot be tricked. How do you know Omega cannot be tricked?
By hypothesis, this is how it works: Omega can predict your choice with accuracy strictly greater than 0.5 (more than half the time), regardless of Free Will or Word of God or trickery or Magic.
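One detail worth flagging: with the standard payoffs, one-boxing only has the higher expected payoff once the accuracy clears 1001/2000 = 0.5005, a hair above one half. A minimal sketch of that arithmetic, assuming the predictor is equally accurate against one-boxers and two-boxers:

```python
def expected_values(p: float) -> tuple[float, float]:
    """Expected payoffs (one_box, two_box) against a predictor that is
    correct with probability p, under the standard payoffs."""
    ev_one = p * 1_000_000                # box B is full iff the prediction was right
    ev_two = (1 - p) * 1_000_000 + 1_000  # box B is full iff the prediction was wrong
    return ev_one, ev_two

# Break-even: p * 1e6 = (1 - p) * 1e6 + 1e3  =>  p = 1001/2000 = 0.5005
for p in (0.5, 0.5005, 0.51, 0.9):
    one, two = expected_values(p)
    print(f"p={p}: one-box EV ${one:,.0f}, two-box EV ${two:,.0f}")
```

At p = 0.5005 the two options tie; above it, one-boxing pulls ahead quickly.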
The whole point of the thought experiment is to analyze a choice under circumstances where the choice causes the outcomes to have been laid out differently.
If you fight the hypothesis by asserting that some other worldviews grant players Magical Powers From The Beyond to deceive Omega (who is just a mental tool for the thought experiment), then I can freely assert that Omega has Magical Powers From The Outer Further Away Beyond that can neutralize those lesser powers or predict around them altogether. Or maybe Omega just has a time machine. Or maybe Omega just fucking can, don’t fight the premises dammit!
And as wedrifid pointed out, this is not even the main reason why the smarter two-boxers two-box. It’s certainly one of the common reasons why the less-smart ones do, though, in my experience. (Since they never read the Sequences, aren’t scientists, and never learned not to fight the premises! Ahem.)