So because I disagree with your consensus, my rational objection must be wrong?
I didn’t change the scenario. I looked at the scenario, and asked what someone applying CDT rationally, who understood that it’s impossible to tell whether you’re being simulated or not, would do.
And, as it happened, I got the answer “they would one-box, because they’re probably a simulation”.
If I posted a scenario where an EDT person would choose to walk through a minefield, because they’ve never seen anyone walk through a minefield and thus don’t consider walking through a minefield to be evidence that they won’t live much longer, would you not think my scenario-crafting skills were a bit weak?
So because I disagree with your consensus, my rational objection must be wrong?
Not wrong, beside the point. Objections like that don’t touch the core of the problem at all. Finding clever ways to make the differences between decision theories not matter in a particular example doesn’t change the validity of the decision theories.
Your minefield example is different in that the original formulation of Newcomb’s problem gets the point across for almost everyone, while I’m not sure what the point of the minefield example would be. That EDT would be even stupider than it already is if it restricted what kinds of evidence could be considered? Well, yes, of course. I won’t defend EDT; it’s wronger than CDT (though at least a bit better defined).
Not wrong, beside the point. Objections like that don’t touch the core of the problem at all. Finding clever ways to make the differences between decision theories not matter in a particular example doesn’t change the validity of the decision theories.
CDT is seemingly imperfect. I have acknowledged such.
But pointing to CDT as failing when it doesn’t fail doesn’t help. Pointing to where it DOES fail helps.
When I see someone getting the right answer for the wrong reason I criticise their reasoning.
The point you should take away from Newcomb’s paradox isn’t that CDT fails (in some formulations it seems to; in others it’s just hard to apply); it’s that CDT is really hard to apply, so using something that gets the right answer easily is better.
Newcomb’s problem tries to show that CDT’s caring only about what your decision causes afterwards can be a weakness, by providing an example where the things caused by accurate predictions of your decision outweigh the things caused by the decision itself. Everything else is just window dressing. You are using the window dressing to explain why you care about other things caused by the decision, so that you coincidentally act just as if you also cared about the things caused by accurate predictions of your decision. But as long as you construe the consequences of the decision that, according to the intent of the problem statement, should lead to the less desirable outcome as actually leading to the more desirable outcome, you are not addressing Newcomb’s problem. You are just showing that what is a formulation of Newcomb’s problem for most people isn’t a formulation of Newcomb’s problem for you, in a way that doesn’t generalize.
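For concreteness, here’s a rough sketch of the payoff structure I mean, using the $1,000,000 / $10,000 figures from this thread and made-up accuracy values (an illustration, not part of the problem statement):

```python
# Rough sketch of the payoff structure Newcomb's problem is meant to exercise.
# Dollar figures match the ones used in this thread; accuracy values are made up.

def expected_payoff(choice: str, accuracy: float) -> float:
    """Expected payoff of 'one-box' or 'two-box' given the predictor's accuracy."""
    big, small = 1_000_000, 10_000
    if choice == "one-box":
        # Box A holds the million only if the predictor foresaw one-boxing.
        return accuracy * big
    # Two-boxing always collects the small box, plus the million only in the
    # cases where the predictor wrongly expected one-boxing.
    return small + (1 - accuracy) * big

for acc in (0.5, 0.6, 0.9):
    print(acc, expected_payoff("one-box", acc), expected_payoff("two-box", acc))
# roughly: 0.5 -> 500,000 vs 510,000 (a coin-flip predictor favours two-boxing)
#          0.6 -> 600,000 vs 410,000
#          0.9 -> 900,000 vs 110,000 (an accurate predictor heavily favours one-boxing)
```

Once the prediction is accurate, the money routed through the prediction swamps the extra $10,000 the choice causes directly; that, and only that, is what the problem is built to exercise.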
The “accurate prediction” is a central part of Newcomb’s problem. The issues of whether it’s possible (I feel it is) and IN WHAT WAYS it is possible are central to the validity of Newcomb’s problem.
If every possible way of making the accurate prediction led to CDT working, then Newcomb’s problem wouldn’t be a problem for CDT (apart from the practical one of it being hard to apply correctly).
At present, it seems like there are possible ways that make CDT work, and possible ways that make CDT not work. If it were to someday be proved that all possible ways make CDT work, that would be a major proof. If it were to be proved (beyond all doubt) that a possible way was completely incompatible with CDT, that could also be important for AI creation.
I will admit that, given how often CDT users fail in this scenario, my objection isn’t that strong: CDT tends to lead people to the wrong answer here, so it’s not useful to them.
I suggest that the way you use ‘CDT’ is actually a hop and a jump in the direction of TDT. When you already have a box containing $1,000,000 in your hand, you are looking at $10,000 sitting on the table and deciding not to take it. Even though you know that nothing you do now has any way of causing the money you already have to disappear. Pure CDT agents just don’t do that.
If you don’t know whether you’re a simulation or not, you don’t know whether or not your taking the second box will cause the real-world money not to be there.
And, as a simulation, you probably won’t get to spend any of that sim-world money you’ve got there.
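Rough sketch of that calculation, with made-up numbers (a 50/50 chance of being the simulation, and only the real-world copy’s winnings counting for anything; neither assumption is in the problem statement):

```python
# Rough sketch of the causal calculation under uncertainty about being the simulation.
# The 50% probability and the "only real-world money counts" utility are made-up
# assumptions for illustration.

BIG, SMALL = 1_000_000, 10_000

def causal_ev(two_box: bool, p_sim: float = 0.5, box_a_if_real: int = 0) -> float:
    """Causal expected value of the real-world winnings when you might be the sim.

    box_a_if_real: whatever you believe box A already contains if you turn out
    to be the real agent; your choice cannot causally change that branch.
    """
    if two_box:
        # As the simulation, two-boxing causes box A to be left empty for the real
        # run, so the real copy of you (deliberating the same way, hence also
        # two-boxing) only collects the small box.
        sim_branch = SMALL
        real_branch = box_a_if_real + SMALL
    else:
        # As the simulation, one-boxing causes box A to be filled, and the real
        # copy of you collects the million.
        sim_branch = BIG
        real_branch = box_a_if_real
    return p_sim * sim_branch + (1 - p_sim) * real_branch

# Two-boxing gains $10,000 in the branch where you are real, but costs $990,000
# of spendable money in the branch where you are the simulation.
print(causal_ev(two_box=False))  # 500000.0 (plus half of whatever is in box A)
print(causal_ev(two_box=True))   # 10000.0 (plus half of whatever is in box A)
```

Taking the second box only gains you anything in the branch where you’re real, and that branch carries only half the probability weight.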
To be fair, I don’t particularly use CDT consciously, because it seems to be flawed somehow (or at least, harder to use than intuition, and I’m lazy). But I came across Newcomb’s paradox, thought about it, and realised that in the traditional formulation I’m probably a simulation.
I don’t see why realising I’m probably a simulation is something a CDT agent can’t do?
If you don’t know whether you’re a simulation or not, you don’t know whether or not your taking the second box will cause the real-world money not to be there. And, as a simulation, you probably won’t get to spend any of that sim-world money you’ve got there.
Replace ‘Omega’ with Patrick Jane. No sims. What do you do?
A) I one-box. I will one-box in most reasonable scenarios.
B) How do you predict other people’s actions?
Personally, I mentally simulate them. Not particularly well, mind, but I do mentally simulate them.
Am I unusual in this?
I’ve never watched The Mentalist, but if Patrick Jane is sufficiently good to get a 99% success rate, I’m guessing his simulations are pretty damn good.
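For what it’s worth, at a 99% hit rate the expected payoffs (with the illustrative $1,000,000 / $10,000 figures used in this thread) come out heavily in favour of one-boxing:

```python
# Expected payoffs at 99% prediction accuracy, same illustrative figures as above.
print(0.99 * 1_000_000)           # one-box: about 990,000
print(10_000 + 0.01 * 1_000_000)  # two-box: about 20,000
```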
Patrick Jane is a fictional character in the TV show The Mentalist. He’s a former (fake) psychic who now uses his cold reading skills to fight crime.
Cheers, I had been looking that up; oddly, my edit to my post didn’t seem to update it.