The difficulty I am having here is not so much that the stated nature of the problem is unrealistic as that it asks one to assume one is irrational. With a .999999999c spaceship it is not irrational to assume one is in a trolley on a spaceship if one actually is in a trolley on a spaceship. There is not enough information in the Omega puzzle: it assumes that you, the person it drops the boxes in front of, know that Omega is predicting, but it does not tell you how you know that. As the mental state ‘knowing it is predicting’ is fundamental to the puzzle, not knowing how one came to that conclusion asks you to be a magical thinker for the purposes of the puzzle. I believe this may at least partially explain the apparent lack of consensus.
I am also suspicious of the ambiguity of the word ‘predict’, but am having trouble phrasing the issue. Omega may be using astrology and happen to have been right each of the 100 times, or it may be literally looking forward in time. Without knowing which, how can one make the best choice?
All that said, taking just B is my plan, since with $1,000,000 I can afford to lose $1,000.
I agree that I can’t imagine any justified way of coming to believe Omega has the properties that I am presumed to believe Omega to have. So, yes, the thought experiment either assumes that I’ve arrived at that state in some unjustified way (as you say, assume I’m irrational, at least sometimes) or that I’ve arrived at it in some justified way I currently have no inkling of (and therefore cannot currently imagine).
Assuming that I’m irrational sometimes, and sometimes therefore arrive at beliefs that aren’t justified, isn’t too difficult for me; I have a lot of experience with doing that. (Far more experience than I have with riding a trolley on a spaceship, come to that.)
But, sure, I can see where people whose experience doesn’t include that, or whose self-image rejects it regardless of their experience, or who otherwise have trouble imagining themselves arriving at beliefs that aren’t rationally justified, might balk at that step.
Without knowing which, how can one make the best choice?
If by “best choice” we mean the choice that has the best possible results, then in this case we either cannot make the best choice except by accident, or we always make the best choice, depending on whether the things that didn’t in fact happen were possible before they didn’t happen, which there’s no particular reason to believe.
If by “best choice” we mean the choice that has the highest expected value given what we know when we make it, then we make the best choice by evaluating what we know.
Thanks, that does help a little, though I should say that I am pretty sure I hold a number of irrational beliefs that I have yet to excise. Assuming that Omega literally implanted the idea into my head is a different thought experiment from one where Omega turned out to be predicting, which is different again from one where Omega merely says it predicted the result, and so on. Until I know how and why I know it is predicting the result, I am not sure how I would act in the real case. How Omega told me that I was only allowed to pick boxes A and B or just B may or may not be helpful, but either way it is not as important as how I know it is predicting.
Edit: There seem to be a number of thought experiments wherein I have an irrational belief that I can mentally model more accurately, like how I might behave if I thought that I was the King of England. Now I am wondering what about this specific problem is giving me trouble.
Until I know how and why I know it is predicting the result, I am not sure how I would act in the real case.
Fair enough.
For my own part, I find that I often act on my beliefs in a situation without stopping to consider what my basis for those beliefs is, so it’s not too difficult for me to imagine acting on my posited beliefs about Omega’s predictive ability while ignoring the question of where those beliefs came from. I simply accept, for the sake of the exercise, that I do believe it and act accordingly.
Another way of looking at it you might find helpful is to leave aside altogether the question of what I would or wouldn’t do, and what I can and can’t believe, and instead ask what the right thing to do would be were this the actual situation.
E.g., if you give me a device that is indistinguishable from a revolver, but is designed in such a way that placing it to my temple and pulling the trigger doesn’t put a bullet in my skull but instead causes Vast Quantities of Really Good Stuff to happen, the right thing to do is put the device to my temple and pull the trigger. I won’t actually do that, because I have no way of knowing what the device actually does, but whether I do it or not, it’s the right thing to do.
Thank you. Depersonalising the question makes it easier for me to think about. If ‘do you take one box or two’ becomes ‘should one take one box or two’… I am still confused. I’m confident that just box B should be taken, but I think that I need information that is implied to exist but is not presented in the problem in order to give a correct answer: namely, the nature of the predictions Omega has made.
With the problem as stated I do not see how one could tell whether Omega got lucky 100 times with a flawed system, or whether it follows a deterministic or causality-breaking process.
One thing I would say is that, picking just B, the most you could lose is $1,000, if B turns out to be empty. Picking A and B, the most you could gain over just B is $1,000. Is it worth betting a reasonable chance at $1,000,000 for a $1,000 gain, a gain you only get if you beat a computer at a game 100 people failed to beat it at, especially a game where you more or less axiomatically do not understand how it is playing?
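That asymmetry can be made concrete with a quick expected-value sketch. This is only a toy calculation, assuming the standard payoffs ($1,000 always in box A, $1,000,000 in box B iff one-boxing was predicted) and treating Omega’s accuracy p as a free parameter:

```python
# Expected value of each choice as a function of Omega's predictive
# accuracy p, under the standard Newcomb payoffs: box A always holds
# $1,000; box B holds $1,000,000 iff Omega predicted one-boxing.

def ev_one_box(p):
    # With probability p Omega correctly predicted one-boxing,
    # so box B contains $1,000,000; otherwise it is empty.
    return p * 1_000_000

def ev_two_box(p):
    # With probability p Omega correctly predicted two-boxing,
    # leaving box B empty, so you get only box A's $1,000.
    # With probability 1 - p it guessed wrong and you get both.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: one-box ${ev_one_box(p):,.0f}, two-box ${ev_two_box(p):,.0f}")
```

At p = 0.99 the comparison is roughly $990,000 against $11,000 — which is the “with $1,000,000 I can afford to lose $1,000” intuition in numbers.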
Sorry, I am having difficulty explaining, as I am not sure what it is I am trying to get across; I lack the words. I am having trouble with the use of the word ‘predict’, as it could imply any number of methods of prediction, and some of those methods change the answer you should give.
For example, if it was predicting by the colour of the player’s shoes, it may have had a micron over a 50% chance of being right and just happened to have been correct the 100 times you heard of. In that case one should take A and B. If, on the other hand, it was a visitor from a higher matrix and got its answer by simulating you perfectly at fast forward, then whatever you want to take is the best option, and in my case that is B. If it is breaking causality by looking through a window into the future, then take box B. My answers are conditional on information I do not have. I am having trouble mentally modelling this situation without assuming one of these cases to be true.
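For what it’s worth, the “micron over 50%” case can be checked with arithmetic: under the standard payoffs, one-boxing only has the higher expected value once Omega’s accuracy passes a break-even point just above one half. A toy calculation:

```python
# One-box EV:  p * 1_000_000
# Two-box EV:  p * 1_000 + (1 - p) * 1_001_000  =  1_001_000 - 1_000_000 * p
# Setting them equal and solving for p gives the break-even accuracy.
break_even = 1_001_000 / 2_000_000
print(break_even)  # 0.5005

# A barely-better-than-chance predictor (shoe colour, astrology) sits
# below the threshold, so two-boxing has the higher expected value;
# a near-perfect simulator sits far above it, favouring one-boxing.
def one_boxing_wins(p):
    return p * 1_000_000 > p * 1_000 + (1 - p) * 1_001_000

print(one_boxing_wins(0.500001))  # False
print(one_boxing_wins(0.999))     # True
```

So on pure expected value the shoe-colour hypothesis and the simulator hypothesis really do give opposite answers, with the crossover at an accuracy of 0.5005.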
This seems a bizarre way of thinking about it, to me. It’s as though you’d said “suppose there’s someone walking past Sam in the street, and Sam can shoot and kill them, ought Sam do it?” and I’d replied “well, I need to know how reliable a shot Sam is. If Sam’s odds of hitting the person are low enough, then it’s OK. And that depends on the make of gun, and how much training Sam has had, and...”
I mean, sure, in the real world, those are perhaps relevant factors (and perhaps not). But you’ve already told me to suppose that Sam can shoot and kill the passerby. If I assume that (which in the real world I would not be justified in simply assuming without evidence), the make of the gun no longer matters.
Similarly, I agree that if all I know is that Omega was right in 100 trials that I’ve heard of, I should lend greater credence to the hypothesis that there were >>100 trials, the successful 100 were cherry-picked, and Omega is not a particularly reliable predictor. This falls into the same category as assuming Omega is simply lying… sure, it’s the highest-expected-value thing to do in an analogous situation that I might actually find myself in, but that’s different from what the problem assumes.
The problem assumes that I know Omega has an N% prediction rate. If I’m going to engage with the problem, I have to make that assumption. If I am unable to make that assumption, and instead make various other assumptions that are different, then I am unable to engage with the problem.
Which is OK… engaging with Newcomb’s problem is not a particularly important thing to be able to do. If I’m unable to do it, I can still lead a fulfilling life.