But making Omega less accurate doesn’t alleviate the bizarreness of Omega. The incredible thing isn’t that Omega is accurate. It is that his “predictions” are influenced (acausally?) by future events. Decreasing the accuracy of the predictions just makes it harder to do the experiments that show conclusively that Omega is doing something supernatural. It doesn’t make what he does any less supernatural.
Actually, Omega’s prediction and your action are both the result of a common cause (at least under a model of the situation that meets the given problem constraints—see EY’s justification in the case of Newcomb’s problem [1].) This doesn’t require backwards-flowing causality.
See also Anna Salamon’s article about the multiple Newcomb’s problem causal models.
[1] This article. The paragraph beginning with the words “From this, I would argue, TDT follows.” goes over the constraints that lead EY to posit the causal model I just gave.
With all due respect, I have to disagree. My decision, made now, is modeled to change the output of an algorithm which, in reality, spit out its result some time ago.
Universe: Make a decision.
Me: What are my choices?
Universe: You don’t have any choices. Your response was determined long ago.
Me: Uh, so how am I supposed to decide now?
Universe: Just tell me which result you would prefer.
Me: The one that gives me the most utility.
Universe: Poof. Congratulations, you have made the best decision. Thank you for choosing to use TDT, the decision theory which makes use of the secret power of the quantum to make you rich.
Yeah, I’m being a bit unfair. But, as applied to human decision making, it still looks to me as though there is causation (i.e. information) running back in time from my “free will” decision today to some “critical nexus” in the past.
Are you up-to-date on the free will sequence? Now would be a good time, as it sorts out the concepts of free will, determinism, and choice.
Because I never send someone off to read something as my response without summarizing what I expect them to learn: You are still making a choice, even if you are in a deterministic world. A computer program playing Parfit’s Hitchhiker makes a choice in basically the same sense that you make a choice when you’re in it.
With that in mind, you can actually experiment with what it’s like to be Omega. Assume that you are given the source code of a program written to play Parfit’s Hitchhiker. You’re allowed to review it, you decide whether to choose “rescue” based on whether you expect the program to output “pay” after waking up, and then it runs.
In that case, the program is making a choice. You’re making a perfect prediction [1] of its choice. But where’s the reverse causation?
[1] except to the extent the program uses random predicates, in which case you figure out the probability of being paid, and whether that justifies a rescue.
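To make the experiment concrete, here is a minimal sketch in Python. The function names (`hitchhiker_program`, `driver_decides`) are invented for illustration; the point is only the shape of the setup, not any particular decision theory implementation.

```python
# A toy Parfit's Hitchhiker: the "program" is an ordinary function whose
# source you (playing Omega / the driver) get to inspect before it runs.

def hitchhiker_program(rescued: bool) -> str:
    """The agent's published decision procedure.

    It runs only after the rescue decision has been made, yet its
    (predictable) output is what the rescue decision keyed on.
    """
    if rescued:
        return "pay"      # an agent whose code commits to paying
    return "no deal"

def driver_decides(program) -> bool:
    """Predict the program's choice by running its code ahead of time,
    then rescue only if the predicted output is "pay"."""
    predicted_output = program(rescued=True)
    return predicted_output == "pay"

if __name__ == "__main__":
    rescue = driver_decides(hitchhiker_program)
    outcome = hitchhiker_program(rescued=rescue)
    print(f"rescued={rescue}, program later outputs: {outcome!r}")
```

The driver’s decision and the program’s later output are both read off the same source code (the “common cause” from upthread), and nothing needs to flow backwards from the future run to the past rescue decision.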
I’m pretty sure I have read all of the free will sequence. I am a compatibilist, and have been since before EY was born. I am quite happy with analyses that have something assumed free at one level (of reduction) and determined at another level. I still get a very bad feeling about Omega scenarios. My intuition tells me that there is some kind of mind projection fallacy being committed. But I can’t put my finger on exactly where it is.
I appreciate that the key question in any form of decision theory is how you handle the counterfactual “surgery”. I like Pearl’s rules for counterfactual surgery: If you are going to assume that some node is free, and to be modeled as controlled by someone’s “free decision” rather than by its ordinary causal links, then the thing to do is to surgically sever the causal links as close to the decision node as possible. This modeling policy strikes me as simply common sense. My gut tells me that something is being done wrong when the surgery is pushed back “causally upstream”, to a point in time before the modeled “free decision”.
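To make the contrast concrete, here is a toy sketch of the two surgery placements, assuming a deliberately simplified three-node model (a Disposition node driving both the Decision and Omega’s Prediction) with made-up Newcomb-style payoffs; none of this is Pearl’s or EY’s actual formalism.

```python
# Toy Newcomb-like causal model: a single upstream Disposition node drives
# both Omega's Prediction and the agent's Decision.
#   Disposition -> Decision
#   Disposition -> Prediction
# Payoff: $1M in the opaque box iff the prediction is "one-box",
# plus $1k if the agent two-boxes.

def prediction(disposition: str) -> str:
    return disposition            # Omega reads the disposition perfectly

def payoff(decision: str, pred: str) -> int:
    big = 1_000_000 if pred == "one-box" else 0
    small = 1_000 if decision == "two-box" else 0
    return big + small

def surgery_at_decision(disposition: str, forced_decision: str) -> int:
    """Surgery as close to the decision node as possible: cut only the
    Disposition -> Decision link. The prediction is still computed from
    the unchanged disposition, so forcing the decision cannot move the $1M."""
    return payoff(forced_decision, prediction(disposition))

def surgery_upstream(forced_disposition: str) -> int:
    """Surgery pushed 'causally upstream' onto the disposition itself.
    Both descendants (the decision and the prediction) now change together."""
    return payoff(forced_disposition, prediction(forced_disposition))

if __name__ == "__main__":
    # Decision-node surgery: two-boxing looks strictly better for a fixed disposition.
    print(surgery_at_decision("one-box", "two-box"))   # 1001000
    print(surgery_at_decision("one-box", "one-box"))   # 1000000
    # Upstream surgery: one-boxing wins.
    print(surgery_upstream("one-box"))                  # 1000000
    print(surgery_upstream("two-box"))                  # 1000
```

The two placements give different verdicts: cutting only the link into the decision node leaves the prediction fixed, so two-boxing looks dominant, while the upstream surgery moves both descendants together and favours one-boxing. My complaint is precisely that the second surgery reaches back to a point before the modeled decision.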
I understand that if we are talking about the published “decision making” source code of a robot, then the true “free decision” is actually made back there upstream in the past. And that if Omega reads the code, then he can make pretty good predictions. What I don’t understand is why the problem is not expressed this way from the beginning.
“A robot in the desert needs its battery charged soon. A motorist passes by, checks the model number, looks up the robot specs online, and then drives on, knowing this robot doesn’t do reciprocity.” A nice simple story. Maybe the robot designer should have built in reciprocity. Maybe he will design differently next time. No muss, no fuss, no paradox.
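In code, the whole story is just a spec lookup; the model numbers and the spec table below are invented for illustration.

```python
# Minimal sketch of the desert-robot story: the motorist predicts nothing
# spooky, he just looks up the published spec for the model number.
# The model numbers and spec table are made up for illustration.

ROBOT_SPECS = {
    "RX-7": {"reciprocates": True},    # pays back a battery charge later
    "QT-3": {"reciprocates": False},   # takes the charge and rolls on
}

def motorist_decides(model_number: str) -> bool:
    """Charge the robot's battery only if its published design reciprocates."""
    spec = ROBOT_SPECS.get(model_number, {"reciprocates": False})
    return spec["reciprocates"]

print(motorist_decides("QT-3"))   # False: this robot doesn't do reciprocity, so drive on
```

The “prediction” here is nothing more than reading the published design, which is the whole point of this way of telling the story.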
I suppose there is not much point continuing to argue about it. Omega strikes me as both wrong and useless, but I am not having much luck convincing others. What I really should do is just shut up on the subject and simply cringe quietly whenever Omega’s name is mentioned.
Thanks for a good conversation on the subject, though.
What I don’t understand is why the problem is not expressed this way from the beginning.

I don’t know for sure—but perhaps a memetic analysis of paradoxes might throw light on the issue:
Famous paradoxes are often the ones that cause the most confusion and discussion. Debates and arguments make for good fun and drama—and so are copied around by the participants. If you think about it that way, finding a “paradox” that is confusingly expressed may not be such a surprise.
Another example would be: why does the mirror reverse left and right but not up and down?
There, the wrong way of looking at the problem seems to be built into the question.
(Feynman’s answer).
Because the point is to explain to the robot why it’s not getting its battery charged?
That is either profound, or it is absurd. I will have to consider it.
I’ve always assumed that the whole point of decision theory is to give normative guidance to decision makers. But in this case, I guess we have two decision makers to consider—robot and robot designer—operating at different levels of reduction and at different times. To say nothing of any decisions that may or may not be being made by this Omega fellow.
My head aches. Up to now, I have thought that we don’t need to think about “meta-decision theory”. Now I am not sure.
Mostly we want well-behaved robots—so the moral seems to be to get the robot maker to build a better robot that has a good reputation and can make credible commitments.
Hm, that robot example would actually be a better way to go about it...
I think we discussed that before—if you think you can behave unpredictably and outwit Omega, then to stay in the spirit of the problem you have to imagine you have built a deterministic robot, published its source code—and it will be visited by Omega (or maybe just an expert programmer).
I am not trying to outwit anyone. I bear Omega no ill will. I look forward to being visited by that personage.
But I doubt that your robot problem is really “in the spirit” of the original. Because, if it is, I can’t see why the original formulation still exists.
Well, sure—for one thing, in the scenarios here, Omega is often bearing gifts!
You are supposed to treat the original formulation in the same way as the robot one, IMO. You are supposed to believe that a superbeing who knows your source code can actually exist—and that you are not being fooled or lied to.
If your problem is that you doubt that premise, then it seems appropriate to get you to consider a rearranged version of the problem—where the premise is more reasonable—otherwise you can use your scepticism to avoid considering the intended problem.
The robot formulation is more complex—and that is one reason for it not being the usual presentation of the problem. However, if you bear in mind the reason for many people here being interested in optimal decision theory in the first place, I hope you can see that it is a reasonable scenario to consider.
FWIW, much the same goes for your analysis of the hitch-hiker problem. There your analysis is even more tempting—but you are still dodging the “spirit” of the problem.
You mean that he predicts future events? That is sometimes possible to do—in cases where they are reasonably well determined by the current situation.