Can you please explain why a rational decision theory cannot be applied?
As I understand it, perfect rationality in this scenario requires that we assume some Bayesian prior over all possible implementations of Omega and do a ton of computation for each case. For example, some Omegas could be type 3 and deceivable with non-zero probability; we would have to determine how they can be deceived. If we know which implementation we’re up against, the calculations are a little easier, e.g. in the “simulating Omega” case we just one-box without thinking.
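To make that concrete, here is a minimal sketch in Python of the kind of ranking I mean, with a made-up prior over my Omega types and made-up prediction accuracies; the numbers are pure illustration, not part of the problem statement:

```python
# Rank one-boxing vs. two-boxing by expected payoff, marginalizing over
# which implementation of Omega we are facing. All numbers are hypothetical.

# Prior over Omega implementations (made-up weights).
prior = {
    "lying":      0.2,   # type 1: "the money is already there" is a lie
    "simulating": 0.5,   # type 2: runs a copy of you, effectively perfect
    "deceivable": 0.3,   # type 3: can be fooled with some probability
}

# P(box B contains the $1M | your actual choice), per implementation.
p_box_full = {
    "lying":      {"one-box": 0.0, "two-box": 0.0},
    "simulating": {"one-box": 1.0, "two-box": 0.0},
    "deceivable": {"one-box": 0.9, "two-box": 0.3},
}

BOX_A, BOX_B = 1_000, 1_000_000  # the standard Newcomb payoffs

def expected_payoff(choice: str) -> float:
    """Expected winnings of a choice, averaged over the prior on Omegas."""
    total = 0.0
    for omega, p_omega in prior.items():
        p_full = p_box_full[omega][choice]
        payoff = p_full * BOX_B + (BOX_A if choice == "two-box" else 0.0)
        total += p_omega * payoff
    return total

for choice in ("one-box", "two-box"):
    print(choice, expected_payoff(choice))
```

With these made-up numbers most of the prior mass sits on the simulating Omega and one-boxing comes out far ahead; shift enough mass onto the lying Omega and two-boxing wins, which is exactly why the prior matters.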
By that definition of “perfect rationality”, no two perfect rationalists can exist in the same universe, or in any material universe where the time available before a decision is always finite.
Some assumptions allow you to play some games rationally with finite resources, as in the last sentence of my previous comment. Unfortunately we aren’t given any such assumptions in Newcomb’s problem, so I fell back on the decision procedure you recommended: Solomonoff induction. Don’t like it? Give me a workable model of Omega.
Yes, it’s true. Perfectly playing any non-mathematical “real world” game (the formulation Vladimir Nesov insists on) requires great powers. If you can translate the game into maths to make it solvable, please do.
The decision theory must allow approximations: a ranking that lets you find (or at least recognize) as good a solution as possible, given the practical limitations.
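As one possible sketch of what “allow approximations” could mean operationally (a hypothetical interface, not a claim about any particular theory): score the candidate actions with whatever coarse models you can afford within a time budget, and commit to the best action found so far.

```python
import time

def approximately_best_action(actions, weighted_models, payoff, budget_s=0.01):
    """Score each action against as many coarse models of the game as the
    time budget allows, then return the best-scoring action found so far.
    `weighted_models` is an iterable of (weight, model) pairs, e.g. the
    prior over Omega types from the sketch above."""
    start = time.monotonic()
    scores = {a: 0.0 for a in actions}
    for weight, model in weighted_models:
        if time.monotonic() - start > budget_s:
            break                    # out of budget: stop refining the ranking
        for a in actions:
            scores[a] += weight * payoff(model, a)
    return max(actions, key=scores.get)
```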
You are reasoning from the faulty assumption that “surely it’s possible to formalize the problem somehow and do something”. The problem statement is self-contradictory, and we need to resolve the contradiction; that is only possible by treating some part of the problem statement as false. That’s what the prior over Omegas is for: we’ve been told some bullshit, and we need to determine which parts are true. Note how my Omegas of type 1 and 2 banish the paradox: in case 1, “the money is already there anyway” becomes a plain and simple lie, and in case 2, “Omega has already predicted your choice” becomes a lie once you’re inside Omega. I say the real world doesn’t have contradictions. Don’t ask me to reason approximately from contradictory assumptions.
You’ve got to decide something when faced with the situation. You don’t seem to be arguing that Newcomb’s test itself literally can’t be set up, so what do you mean by contradictions? The physical system itself can’t be false, only its description can. Whatever contradictions you perceive in the test come from problems of interpretation; the only relevant part of this whole endeavor is computing the decision.
The physical system can’t be false, but Omega seems to be lying to us. How do you, as a rationalist, deal with people who contradict themselves verbally? You build models, as I did in the original post.
By the statement of the problem, Omega doesn’t lie. It doesn’t even assert anything; it just places the money in the box or doesn’t.
What’s wrong with you? If Omega tells us the conditions of the experiment (about “foretelling” and so on), then Omega is the one lying; if someone else tells us, then that someone else is. Let’s wrap this up, I’m sick of this.
As has been pointed out numerous times, it may well be possible to foretell your actions, even by some variation on just reading this forum and looking at what people claim they would choose in the given situation. The fact that you came up with specific examples that ridicule the claim of being able to predict your decision doesn’t mean that there is literally no way to do it. Another, more detailed example is what you listed as (2), the simulation approach.
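For instance, a toy version of that “just read the forum” predictor, with entirely made-up data, needs no superpowers at all and still correlates with your choice:

```python
# Hypothetical sketch: Omega looks up what a player has publicly claimed
# they would do in Newcomb's problem and fills box B accordingly.

stated_positions = {      # scraped public claims (entirely made up)
    "alice": "one-box",
    "bob":   "two-box",
}

def fill_box_b(player: str) -> int:
    """Put the $1M in box B only if the player claimed they would one-box."""
    return 1_000_000 if stated_positions.get(player) == "one-box" else 0

print(fill_box_b("alice"))  # 1000000
print(fill_box_b("bob"))    # 0
```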
Case 3, the “terminating Omega”, is a demonstrable contradiction.
And I already explained where a “simulating Omega” has to lie to you.
Sorry, I don’t want to spend any more time on this discussion. Goodbye.
FWIW, I understand your frustration, but just as a data point I don’t think this reaction is warranted, and I say that as someone who likes most of your comments. I know you made this post in order to escape the rabbit hole, but you must have expected to spend a little time there digging when you made it!