I recommend reading the off-site lead-in post “Ungrateful Hitchhikers” to see why the above points don’t address some of the implications of the argument Silas is making.
I’ve now read it. I’ll set aside the fact that he is attempting to model owners of intellectual property as omniscient. I guess he is trying to slip in that old “But what if everybody did that?” argument. See, Omega-IP-owner knows that if you are an IP pirate, so is everyone else, so he won’t even generate IP. So everyone dies in the desert. Well, I tend to think that Joseph Heller in “Catch-22” had the best answer to the “What if everyone did it?” gambit: “Well if everyone else did it, then I would be a damn fool to do any differently, wouldn’t I?”
The right parable for the argument SilasBarta is trying to make comes from biology—from gene-clone selection theory (roughly, Dawkins’s “The Selfish Gene”). Suppose you are a red flower in a field of red flowers. Along comes a bee, hoping to pick up a little nectar. But what you really want is the pollen the bee carries, or maybe you want the bee to pick up your pollen. The question is whether you should actually provide nectar to the bee. She has already done what you wanted her to do. Giving her some nectar doesn’t cost you very much, but it does cost something. So why pay the bee her nectar?
The answer is that you should give the bee the nectar because all the other flowers in the field are your siblings—if your genes tell you to stiff the bee, then their genes tell them the same. So the bee stops at just a few red flowers, comes up dry each time, and decides to try the white flowers in the next field. Jackpot! The bee returns to the hive, and soon there are hundreds of bees busily pollinating the white flowers. And next year, no more red flowers.
There, the parable works and we didn’t even have to assume that the bee is omniscient.
Incidentally, if we now go back and look at my analysis of the Hitchhiker, you will notice that my solution works because the driver expects almost every person he encounters to have an “honor module”. He doesn’t know for sure that the hitchhiker’s honor is still intact, but it seems like a reasonable bet. Just as the bee guesses that the next flower she visits will provide nectar. Just as the author of “Steal This Book” guesses that most people won’t.
I still much prefer my own analysis over that of the OP.
Okay, I think I see the source of the disconnect: Though my examples involve an omniscient being, that’s not actually necessary for the points to hold. It’s just looking at an extreme end. It would remain optimal to pay even if Omega were only 90% accurate, or 60%, etc.
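To put rough numbers on that (the utilities below are invented for illustration, not part of the original problem):

```python
# A minimal sketch, assuming illustrative utilities: how good does the driver's
# prediction of your type have to be before being a "payer" beats being a
# "refuser" in Parfit's Hitchhiker?

LIFE = 1_000_000   # utility of being rescued (assumed for illustration)
DEATH = 0          # utility of dying in the desert (assumed)
PAYMENT = 100      # cost of paying the driver once in town (assumed)

def expected_utility(is_payer: bool, p: float) -> float:
    """p is the probability that the driver predicts your disposition correctly."""
    if is_payer:
        # Correct prediction: rescued, then you pay. Misprediction: left behind.
        return p * (LIFE - PAYMENT) + (1 - p) * DEATH
    # Correct prediction: left behind. Misprediction: rescued for free.
    return p * DEATH + (1 - p) * LIFE

for p in (0.99, 0.90, 0.60, 0.51):
    print(p, expected_utility(True, p) > expected_utility(False, p))
# Prints True for every accuracy noticeably above chance: with life worth far
# more than the payment, no omniscience is needed for paying to come out ahead.
```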
As for the decision-theoretics of “what if everyone did it?” type reasoning, there’s a lot more to consider than what you’ve given. (A few relevant articles.) Most importantly, by making a choice, you’re setting the logical output of all sufficiently similar processes, not just your own.
In a world of identical beings, they would all “wake up” from any Prisoner’s Dilemma situation finding that they had both defected, or both cooperated. Viewed in this light, it makes sense to cooperate, since it will mean waking up in the pure-cooperation world, even though your decision to cooperate did not literally cause the other parties to cooperate (even if you perceive it that way).
Making the situation more realistic does not change this conclusion either. Imagine you are positively, but not perfectly, correlated with the other beings; and that you go through thousands of PDs at once with different partners. In that case, you can defect, and wake up having found partners that cooperated. Maybe there are many such partners. However, from the fact that you regard it as optimal to always defect, it follows that you will wake up in a world with more defecting partners than if you had regarded it as optimal in such situations to cooperate.
As before, your decision does not cause others to cooperate, but it does influence what world you wake up in.
(Edit: And likewise, for the case of IP, if you defect, you will (arguably) find that you wake up in a world where you get lots of great music for free … but a fundamentally different world, that’s maybe not as pleasant as it could be...)
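To make the correlation point concrete, here is a toy simulation of my own; the payoff values and the way correlation is modeled are assumptions, not anything from the linked articles. Each partner makes the same choice as you with probability q, and the opposite choice otherwise:

```python
# A toy sketch of many simultaneous one-shot Prisoner's Dilemmas against
# partners whose choices are positively but imperfectly correlated with yours.
# Payoffs T=5, R=3, P=1, S=0 are the usual textbook values (assumed here).

import random

T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff

def average_payoff(my_choice: str, q: float, n: int = 100_000) -> float:
    """q is the probability that a partner's choice matches my_choice."""
    total = 0
    for _ in range(n):
        partner = my_choice if random.random() < q else ("D" if my_choice == "C" else "C")
        if my_choice == "C":
            total += R if partner == "C" else S
        else:
            total += T if partner == "C" else P
    return total / n

for q in (0.9, 0.75, 0.6):
    print(q, round(average_payoff("C", q), 2), round(average_payoff("D", q), 2))
# Above roughly q = 5/7, the cooperator "wakes up" in a better world than the
# defector, even though neither choice causes any partner's choice.
```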
The bee situation you described is very similar to the parent-child problem I described: parents that don’t care for their children don’t get their genes into the next generation. And likewise, flowers that don’t give nectar don’t get their genes into the next generation. It is this gene-centeredness that can create an incentive structure/decision theory capable of such “unselfish” decisions!
Since I read your IP example a while ago, the point that an omniscient Omega isn’t actually necessary seemed obvious to me, but I guess it should be emphasized in the text more strongly than it currently is.
But making Omega less accurate doesn’t alleviate the bizarreness of Omega. The incredible thing isn’t that Omega is accurate. It is that his “predictions” are influenced (acausally?) by future events. Decreasing the accuracy of the predictions just makes it harder to do the experiments that show conclusively that Omega is doing something supernatural. It doesn’t make what he does any less supernatural.
Actually, Omega’s prediction and your action are both the result of a common cause (at least under a model of the situation that meets the given problem constraints—see EY’s justification in the case of Newcomb’s problem [1].) This doesn’t require backwards-flowing causality.
See also Anna Salamon’s article about the multiple Newcomb’s problem causal models.
[1] This article. The paragraph beginning with the words “From this, I would argue, TDT follows.” goes over the constraints that lead EY to posit the causal model I just gave.
With all due respect, I have to disagree. My decision, made now, is modeled as changing the output of an algorithm which, in reality, spat out its result some time ago.
Universe: Make a decision.
Me: What are my choices?
Universe: You don’t have any choices. Your response was determined long ago.
Me: Uh, so how am I supposed to decide now?
Universe: Just tell me which result you would prefer.
Me: The one that gives me the most utility.
Universe: Poof. Congratulations, you have made the best decision. Thank you for choosing to use TDT, the decision theory which makes use of the secret power of the quantum to make you rich.
Yeah, I’m being a bit unfair. But, as applied to human decision making, it still looks to me as though there is causation (i.e., information) running back in time from my “free will” decision today to some “critical nexus” in the past.
Are you up-to-date on the free will sequence? Now would be a good time, as it sorts out the concepts of free will, determinism, and choice.
Because I never send someone off to read something as my response without summarizing what I expect them to learn: You are still making a choice, even if you are in a deterministic world. A computer program applied to Parfit’s Hitchhiker makes a choice in basically the same sense that you make a choice when you’re in it.
With that in mind, you can actually experiment with what it’s like to be Omega. Assume that you are given the source code of a program applicable to Parfit’s Hitchhiker. You’re allowed to review it, you decide whether to choose “rescue” based on whether you expect the program to output “pay” after waking up, and then the program is run.
In that case, the program is making a choice. You’re making a perfect prediction [1] of its choice. But where’s the reverse causation?
[1] Except to the extent the program uses random predicates, in which case you figure out the probability of being paid, and whether that justifies a rescue.
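Here is roughly what that experiment looks like in code; the bot names and the outcome strings are mine, invented for the sketch:

```python
# A minimal sketch of playing Omega against a program-hitchhiker. The
# "prediction" is nothing spookier than running the published code on the
# future situation before deciding whether to rescue.

def payer_bot(situation: str) -> str:
    # This program pays once it is safely in town, though nothing forces it to.
    return "pay" if situation == "safe_in_town" else "beg_for_rescue"

def ingrate_bot(situation: str) -> str:
    # This program stiffs the driver once the ride is no longer needed.
    return "refuse" if situation == "safe_in_town" else "beg_for_rescue"

def omega_decides(program) -> str:
    predicted = program("safe_in_town")   # perfect prediction, by simulation
    return "rescue" if predicted == "pay" else "drive_on"

print(omega_decides(payer_bot))    # rescue
print(omega_decides(ingrate_bot))  # drive_on
# The program still makes its choice when it runs; Omega merely computed that
# choice in advance. Both the prediction and the later action flow from the
# same (already written) source code, so nothing runs backwards in time.
```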
I’m pretty sure I have read all of the free will sequence. I am a compatibilist, and have been since before EY was born. I am quite happy with analyses that have something assumed free at one level (of reduction) and determined at another level. I still get a very bad feeling about Omega scenarios. My intuition tells me that there is some kind of mind projection fallacy being committed. But I can’t put my finger on exactly where it is.
I appreciate that the key question in any form of decision theory is how you handle the counter-factual “surgery”. I like Pearl’s rules for counter-factual surgery: If you are going to assume that some node is free, and to be modeled as controlled by someone’s “free decision” rather than by its ordinary causal links, then the thing to do is to surgically sever the causal links as close to the decision node as possible. This modeling policy strikes me as simply common sense. My gut tells me that something is being done wrong when the surgery is pushed back “causally upstream” - to a point in time before the modeled “free decision”.
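For what it’s worth, here is a toy rendering of the two surgery points in the hitchhiker case, with made-up utilities; it is only a sketch of the disagreement, not anyone’s worked example:

```python
# A toy comparison of where the counterfactual surgery is performed.
# Assumed utilities: rescue is worth 1000, paying the driver costs 100.

RESCUE, PAYMENT = 1000, 100

def outcome(disposition_pays: bool, action_pays: bool) -> int:
    rescued = disposition_pays          # the driver predicts from the disposition
    utility = RESCUE if rescued else 0
    if rescued and action_pays:
        utility -= PAYMENT
    return utility

# Surgery at the action node: the disposition (and so the prediction) is held
# fixed at "payer", and refusing looks strictly better.
print([outcome(True, a) for a in (True, False)])   # [900, 1000]

# Surgery pushed upstream to the disposition: the prediction moves with it,
# and the paying disposition wins.
print([outcome(d, d) for d in (True, False)])      # [900, 0]
```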
I understand that if we are talking about the published “decision making” source code of a robot, then the true “free decision” is actually made back there upstream in the past. And that if Omega reads the code, then he can make pretty good predictions. What I don’t understand is why the problem is not expressed this way from the beginning.
“A robot in the desert needs its battery charged soon. A motorist passes by, checks the model number, looks up the robot specs online, and then drives on, knowing this robot doesn’t do reciprocity.” A nice simple story. Maybe the robot designer should have built in reciprocity. Maybe he will design differently next time. No muss, no fuss, no paradox.
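In code, the story is about as dull as decision-making gets; the model numbers and the spec table are invented for illustration:

```python
# A sketch of the robot retelling: the motorist consults a published spec,
# which was fixed long ago by the robot's designer. Nothing needs predicting.

ROBOT_SPECS = {                      # hypothetical published spec sheet
    "RX-7": {"reciprocates": True},
    "RX-9": {"reciprocates": False},
}

def motorist(model_number: str) -> str:
    spec = ROBOT_SPECS.get(model_number, {})
    if spec.get("reciprocates"):
        return "charge the battery"
    return "drive on"

print(motorist("RX-9"))  # "drive on": the real decision was made upstream,
                         # by the designer who left out the reciprocity module.
```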
I suppose there is not much point continuing to argue about it. Omega strikes me as both wrong and useless, but I am not having much luck convincing others. What I really should do is just shut up on the subject and simply cringe quietly whenever Omega’s name is mentioned.
Thanks for a good conversation on the subject, though.
As for why the problem is not expressed this way from the beginning, I don’t know for sure—but perhaps a memetic analysis of paradoxes might throw light on the issue:
Famous paradoxes are often the ones that cause the most confusion and discussion. Debates and arguments make for good fun and drama—and so are copied around by the participants. If you think about it that way, finding a “paradox” that is confusingly expressed may not be such a surprise.
Another example would be: why does the mirror reverse left and right but not up and down?
There, the wrong way of looking at the problem seems to be built into the question.
(Feynman’s answer.)
Because the point is to explain to the robot why it’s not getting its battery charged?
That is either profound, or it is absurd. I will have to consider it.
I’ve always assumed that the whole point of decision theory is to give normative guidance to decision makers. But in this case, I guess we have two decision makers to consider—robot and robot designer—operating at different levels of reduction and at different times. To say nothing of any decisions that may or may not be being made by this Omega fellow.
My head aches. Up to now, I have thought that we don’t need to think about “meta-decision theory”. Now I am not sure.
Mostly we want well-behaved robots—so the moral seems to be to get the robot maker to build a better robot that has a good reputation and can make credible commitments.
Hm, that robot example would actually be a better way to go about it...
I think we discussed that before—if you think you can behave unpredictably and outwit Omega, then to stay in the spirit of the problem you have to imagine you have built a deterministic robot, published its source code—and it will be visited by Omega (or maybe just an expert programmer).
I am not trying to outwit anyone. I bear Omega no ill will. I look forward to being visited by that personage.
But I doubt that your robot problem is really “in the spirit” of the original. Because, if it is, I can’t see why the original formulation still exists.
Well, sure—for one thing, in the scenarios here, Omega is often bearing gifts!
You are supposed to treat the original formulation in the same way as the robot one, IMO. You are supposed to believe that a superbeing who knows your source code can actually exist—and that you are not being fooled or lied to.
If your problem is that you doubt that premise, then it seems appropriate to get you to consider a rearranged version of the problem—where the premise is more reasonable—otherwise you can use your scepticism to avoid considering the intended problem.
The robot formulation is more complex—and that is one reason for it not being the usual presentation of the problem. However, if you bear in mind the reason for many people here being interested in optimal decision theory in the first place, I hope you can see that it is a reasonable scenario to consider.
FWIW, much the same goes for your analysis of the hitch-hiker problem. There your analysis is even more tempting—but you are still dodging the “spirit” of the problem.
You mean that he predicts future events? That is sometimes possible to do—in cases where they are reasonably well determined by the current situation.
Isn’t this group selectionism? Surely the much more likely explanation is that producing more or better nectar attracts the bee to you over all the other red flowers.
I would prefer to call it kin selection, but some people might call it group selection. It is one of the few kinds of group selection that actually work.
Producing more or better nectar wasn’t part of my scenario, nor is it (as far as I know) biologically realistic. It is my bright red color that attracts the bee, and in this regard I am competing with my sibs. But the bee has no sense organ that can remotely detect the nectar. It has to actually land and do the pollen-transfer bit before it finds out whether the nectar is really there. So it is important that I don’t provide the color before I am ready with nectar and the sexy stuff. Else I have either wasted nectar or pissed off the bee.