Annoyance has it right but is too cryptic: it’s the other way around. If your decision theory fails on this test ground but works perfectly well in the real world, maybe you need to work some more on the test ground.
Not quite. The failure of a strong decision theory on a test is a reason for you to start doubting the adequacy of both the test problem and the decision theory. The decision to amend one or the other must always come through you, unless you already trust something else more than you trust yourself. The paradox doesn’t care what you do; it is merely a building block towards a better explication of what kinds of decisions you consider correct.
Whoa, let’s have some common sense here instead of preaching. I have good reasons to trust accepted decision theories. What reason do I have to trust Newcomb’s problem? Given how much in my analysis turned out to depend on the implementation of Omega, I don’t trust the thing at all anymore. Do you? Why?
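You are not asked to trust anything. You have a paradox; resolve it, understand it. What do you refer to when using the word “trust” above?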
Uh, didn’t I convince you that, given any concrete implementation of Omega, the paradox utterly disappears? Let’s go at it again. What kind of Omega do you offer me?
The usual setting: you are a sufficiently simple mere human, not building your own Omegas in the process, going through the procedure in a controlled environment if that helps make the case stronger, and Omega is able to predict your actual final decision by whatever means it pleases. What Omega does to predict your decision doesn’t affect you and shouldn’t concern you; the only thing that seems relevant is that it’s usually right.
“What Omega does to predict your decision doesn’t affect you and shouldn’t concern you; the only thing that seems relevant is that it’s usually right.”
Is this the least convenient world? What Omega does to predict my decision does concern me, because it determines whether I should one-box or two-box. However, I’m willing to allow that in the least convenient world I’m not given enough information. Is this the Newcomb “problem”, then: how to make a rational decision when you’re not given enough information?
No perfectly rational decision theory can be applied in this case, just as you can’t play chess perfectly rationally with a desktop PC. A few comments above I outlined a good approximation that I would use and would recommend a computer to use. This case is just… uninteresting. It doesn’t raise any question marks in my mind. Should it?
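Can you please explain why a rational decision theory cannot be applied?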
As I understand it, perfect rationality in this scenario requires that we assume some Bayesian prior over all possible implementations of Omega and do a ton of computation for each case. For example, some Omegas could be type 3 and deceivable with non-zero probability, and we’d have to determine how. If we know which implementation we’re up against, the calculations are a little easier; e.g. in the “simulating Omega” case we just one-box without thinking.
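To make that kind of computation concrete, here is a minimal sketch of the expected-value comparison under a prior over Omega implementations. The $1,000,000 and $1,000 payoffs are the standard Newcomb amounts; the Omega types, their prior weights and predictive accuracies below are made-up placeholders, not anything given in the problem.

```python
# Hypothetical sketch: expected value of one-boxing vs. two-boxing under an
# assumed prior over Omega implementations. Box B's contents are conditioned
# on the choice via each Omega type's predictive accuracy.

BIG, SMALL = 1_000_000, 1_000

# (prior probability of this Omega type, probability it predicts you correctly)
omega_prior = [
    (0.4, 1.0),   # e.g. a "simulating Omega": effectively always right
    (0.4, 0.9),   # a good but imperfect predictor
    (0.2, 0.5),   # a "type 3" Omega we might deceive: no better than chance
]

def expected_value(one_box: bool) -> float:
    ev = 0.0
    for p_type, p_correct in omega_prior:
        if one_box:
            # Box B is full iff Omega predicted one-boxing (probability p_correct).
            ev += p_type * p_correct * BIG
        else:
            # You always get the small box; B is full only if Omega was wrong.
            ev += p_type * (SMALL + (1 - p_correct) * BIG)
    return ev

print("one-box:", expected_value(True))    # 860,000 with these numbers
print("two-box:", expected_value(False))   # 141,000 with these numbers
```

With weights like these one-boxing comes out ahead; with a prior concentrated on deceivable, no-better-than-chance Omegas, the comparison can flip.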
By that definition of “perfect rationality” no two perfect rationalists can exist in the same universe, or any material universe in which the amount of elapsed time before a decision is always finite.
Some assumptions allow you to play some games rationally with finite resources, like in the last sentence of my previous comment. Unfortunately we aren’t given any such assumptions in Newcomb’s, so I fell back to the decision procedure recommended by you: Solomonoff induction. Don’t like it? Give me a workable model of Omega.
Yes, it’s true. Perfectly playing any non-mathematical “real world” game (the formulation Vladimir Nesov insists on) requires great powers. If you can translate the game into maths to make it solvable, please do.
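The decision theory must allow approximations: a ranking that lets you find (or recognize) as good a solution as possible, given the practical limitations.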
You are reasoning from the faulty assumption that “surely it’s possible to formalize the problem somehow and do something”. The problem statement is self-contradictory. We need to resolve the contradiction, and that is only possible by making some part of the problem statement false. That’s what the prior over Omegas is for: we’ve been told some bullshit, and need to determine which parts are true. Note how my Omegas of type 1 and 2 banish the paradox: in case 1, “the money is already there anyway” becomes a plain and simple lie, and in case 2, “Omega has already predicted your choice” becomes a lie when you’re inside Omega. I say the real world doesn’t have contradictions. Don’t ask me to reason approximately from contradictory assumptions.
You’ve still got to decide something when faced with the situation. You don’t seem to be arguing that Newcomb’s test itself literally can’t be set up, so what do you mean by contradictions? The physical system itself can’t be false, only its description can. Whatever contradictions you perceive in the test come from problems of interpretation; the only relevant part of this whole endeavor is computing the decision.
The physical system can’t be false, but Omega seems to be lying to us. How do you, as a rationalist, deal with it when people contradict themselves verbally? You build models, like I did in the original post.
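Omega doesn’t lie by the statement of the problem. It doesn’t even assert anything; it just places the money in the box or doesn’t.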
What’s wrong with you? If Omega tells us the conditions of the experiment (about “foretelling” and stuff), then Omega is lying. If someone else, then someone else. Let’s wrap this up, I’m sick.
As was pointed out numerous times, it may well be possible to foretell your actions, even by some variation on just reading this forum and looking at what people claim they would choose in the given situation. The fact that you came up with specific examples that ridicule the claim of being able to predict your decision doesn’t mean that there is literally no way to do it. Another, more detailed, example is the simulation approach you listed as (2).
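Case 3, “terminating Omega”: demonstrable contradiction.
I already explained where a “simulator Omega” has to lie to you.
Sorry, I don’t want to spend any more time on this discussion. Goodbye.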
FWIW, I understand your frustration, but just as a data point I don’t think this reaction is warranted, and I say that as someone who likes most of your comments. I know you made this post in order to escape the rabbit hole, but you must have expected to spend a little time there digging when you made it!
The problem setting itself shouldn’t raise many questions. If you agree that the right answer in this setting is to one-box, you probably understand the test. Next, look at the popular decision theories that calculate that the “correct” answer is to two-box. Find what’s wrong with those theories, or with the ways of applying them, and find a way to generalize them to handle this case and other cases correctly.
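What other cases?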
There’s nothing wrong with those theories. They are wrongly applied, selectively ignoring the part of the problem statement that explicitly says you can’t two-box if Omega decided you would one-box. Any naive application will do that because all standard theories assume causality, which is broken in this problem. Before applying decision theories we must work out what causes what. My original post was an attempt to do just that.
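For concreteness, here is a minimal sketch of the two calculations being contrasted here: the causal-style reasoning that treats the box contents as already fixed (and therefore two-boxes), and the calculation that conditions the contents on the choice through the predictor’s accuracy (and therefore one-boxes). The payoffs are the standard $1,000,000 and $1,000; the 99% accuracy is an illustrative assumption.

```python
# Two ways of scoring the same options. Payoffs are the standard Newcomb
# amounts; the 99% predictor accuracy is an illustrative assumption.

BIG, SMALL, ACCURACY = 1_000_000, 1_000, 0.99

# Causal-style reasoning: treat box B's contents as already fixed,
# independent of the choice being made now. Two-boxing then dominates.
for b_full in (True, False):
    b = BIG if b_full else 0
    print(f"B {'full' if b_full else 'empty'}: one-box={b}, two-box={b + SMALL}")

# Conditioning on the prediction instead: box B's contents depend on the
# choice through the predictor's accuracy. One-boxing then wins in expectation.
ev_one_box = ACCURACY * BIG                  # ≈ 990,000
ev_two_box = SMALL + (1 - ACCURACY) * BIG    # ≈ 11,000
print("E[one-box] =", ev_one_box, "E[two-box] =", ev_two_box)
```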
“There’s nothing wrong with those theories. They are wrongly applied, selectively ignoring the part of the problem statement that explicitly says you can’t two-box if Omega decided you would one-box.”
The decision is yours, Omega only foresees it. See also: Thou Art Physics.
Any naive application will do that because the problem statement is contradictory on the surface. Before applying decision theories, the contradiction has to be resolved somehow as we work out what causes what. My original post was an attempt to do just that.
Do that for the standard setting that I outlined above, instead of constructing its broken variations. What it means for something to cause something else, and how one should go about describing situations in that model, should arguably be part of any decision theory.
“the problem statement … explicitly says you can’t two-box if Omega decided you would one-box.”
“The decision is yours, Omega only foresees it.”
These stop contradicting each other if you rephrase a little more precisely. It’s not that you can’t two-box if Omega decided you would one-box—you just don’t, because in order for Omega to have decided that, you must have also decided that. Or rather, been going to decide that—and if I understand the post you linked correctly, its point is that the difference between “my decision” and “the predetermination of my decision” is not meaningful.
As far as I can tell—and I’m new to this topic, so please forgive me if this is a juvenile observation—the flaw in the problem is that it cannot be true both that the contents of the boxes are determined by your choice (via Omega’s prediction), and that the contents have already been determined when you are making your choice. The argument for one-boxing assumes that, of those contradictory premises, the first one is true. The argument for two-boxing assumes that the second one is true.
The potential flaw in my description, in turn, is whether my simplification just now (“determined by your choice via Omega”) is actually equivalent to the way it’s put in the problem (“determined by Omega based on a prediction of you”). I think it is, for the reasons given above, but what do I know?
(I feel comfortable enough with this explanation that I’m quite confident I must be missing something.)
An aspiring Bayesian rationalist would behave like me in the original post: assume some prior over the possible implementations of Omega and work out what to do. So taboo “foresee” and propose some mechanisms as I, ciphergoth and Toby Ord did.