We’re not talking Pascal’s Wager here; you’re not guessing at the behaviour of capricious omnipotent beings. Omega has told you his properties, and is assumed to be trustworthy.
You are stating that. But as far as I can tell Omega is telling me it’s a capricious omnipotent being. If there is a distinction, I’m not seeing it. Let me break it down for you:
1) Capricious → I am completely unable to predict its actions. Yes.
2) Omnipotent → Can do the seemingly impossible. Yes.
So, what’s the difference?
It’s not capricious in the sense you give: you are capable of predicting some of its actions. Because Omega is assumed to be perfectly trustworthy, you can predict with certainty what it will do whenever it tells you what it will do.
So, if it says it’ll give you $10k under some condition (say, if you one-box its challenge), you can predict that it’ll give you the money if that condition arises.
If it were capricious in the sense of being completely unpredictable, it might amputate three of your toes and give you a flower garland.
Note that the problem supposes you do have certainty that Omega is trustworthy; I see no way of reaching that epistemological state, but then again I see no way Omega could be omnipotent, either.
On a somewhat unrelated note, why would Omega ask you for $100 if it had simulated that you wouldn’t give it the money? And why would it do so if it had simulated that you would? What possible use would an omnipotent agent have for $100?
Omega is assumed to be mildly bored and mildly anthropic. And his asking you for $100 could always be PART of the simulation.
Yes, it’s quite reasonable that if it was curious about you it would simulate you and ask the simulation a question. But once it did that, since the simulation was perfect, why would it waste the time to ask the real you? After all, in the time it takes you to understand Omega’s question it could probably simulate you many times over.
So I’m starting to think that encountering Omega is actually pretty strong evidence for the fact that you’re simulated.
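As a rough way to quantify that intuition, here is a minimal sketch in Python. It assumes Omega runs some number N of perfect simulations of you for every real interaction; N is not given by the problem and is purely an illustrative assumption. Under that assumption, a copy of you that finds itself being asked should assign probability N/(N+1) to being one of the simulations.

```python
# Hedged sketch: assumes Omega runs N perfect simulations of you for each
# real interaction (N is an illustrative assumption, not part of the problem),
# and that every copy, simulated or real, has the same subjective experience.

def probability_simulated(n_simulations_per_real_ask: int) -> float:
    """P(I am a simulation | I am being asked by Omega), with equal prior
    weight on each of the N simulated copies and the one real copy."""
    n = n_simulations_per_real_ask
    return n / (n + 1)

for n in (1, 10, 1_000_000):
    print(f"N = {n:>9}: P(simulated) = {probability_simulated(n):.6f}")
```

Even at N = 1 the odds are even, and for any Omega that simulates liberally the probability is effectively 1.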
Maybe Omega recognizes in advance that you might think this way, doesn’t want it to happen, and so precommits to asking the real you. Given that precommitment, this line of reasoning no longer goes through. Moreover, you should be able to figure out that Omega would precommit, thus making it unnecessary for him to explicitly tell you he’s doing so.
Maybe Omega [...] doesn’t want it to happen [...] Moreover, you should be able to figure out that Omega would precommit
(Emphasis mine.)
I don’t think, given the usual problem formulation, that one can figure out what Omega wants without Omega explicitly saying it, and maybe not even in that case.
It’s a bit like a deal with a not-necessarily-evil devil. Even if it tells you something, and you’re sure it’s not lying, and you think the wording is perfectly clear, you should still assign a very high probability that you have no idea what’s really going on and why.
If we assume I’m rational, then I’m not going to assume anything about Omega. I’ll base my decisions on the given evidence. So far, that appears to be no more and no less than what Omega cares to tell us.
Fine, then interchange “assume Omega is honest” with, say, “I’ve played a billion rounds of one-box/two-box with him”… It should be close enough.
I realize this is fighting the problem, but: if I remember playing a billion rounds of the game with Omega, that is pretty strong evidence that I’m a (slightly altered) simulation. An average human takes about ten million breaths each year...
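To put a rough number on that, here is a quick back-of-the-envelope check, assuming (purely for illustration) that each round takes about one breath and that the ten-million-breaths-per-year figure above is right.

```python
# Back-of-the-envelope check of the "billion rounds" point. The rates here
# are illustrative assumptions taken from the comment above (one round per
# breath, ~10 million breaths per year), not part of the problem statement.

rounds = 1_000_000_000          # rounds of one-box/two-box supposedly remembered
breaths_per_year = 10_000_000   # rough figure for an average human

years_needed = rounds / breaths_per_year
print(f"At one round per breath, a billion rounds takes about {years_needed:.0f} years.")
# Roughly 100 years of doing nothing but playing; hence remembering a billion
# rounds looks more like evidence of being a (slightly altered) simulation.
```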
OK, so assume that I’m a transhuman and can actually do something a billion times. But if Omega can simulate me perfectly, why would it waste the time to actually ask me a question once it had simulated me answering it? Let alone do that a billion times… This also seems like evidence that I’m actually simulated. (I notice that in most statements of the problem, the wording implies, but never clearly states, that the non-simulated version of you is involved at all.)