So I may prefer to take into account the preferences of my sims when deciding, because I may end up in a situation in which my fate is decided by my sims, who use the same decision algorithm. And if Omega tells me that he has created n simulations of me whose experiences up to now are exactly the same as mine, including the dialogue with Omega, I should assign probability 1/n to not being one of those simulations.
Is that it?
Basically, yes. This is what EY called “symmetrism” in Three Worlds Collide, and what Greg Egan described in one of his short stories: essentially a more sophisticated version of “do unto others...”.
If this is the point, I object to the way it is conveyed by the post.
First, the post’s title suggests that it is about values, while the problem is really one of game theory. (One may make a case for integrating the symmetric preferences among one’s terminal values, but that isn’t the only possible solution.)
Second, thought experiments should keep counter-intuitive elements to the necessary minimum. We may need simulations here, but why a thousand years of torture and 2^^^^3 simulations? These details distract unnecessarily from the main point, if the main point is what you think it is rather than scope insensitivity.
Third and most importantly: in similar thought experiments, Omega is assumed to be completely trustworthy. But here it is not trustworthy towards the simulations. It tells them, too, that it is going to simulate them and torture the (second-order) simulations depending on their (the first-order simulations’) decision, but this isn’t true: there are no second-order simulations, and the first-order simulations are going to be tortured based on the decision of the unsimulated participant. So, if the participant accepts anthropic reasoning in this case, p = 1/n that he is “real” and p = (n-1)/n that he is simulated and Omega isn’t trustworthy. If, on the other hand, Omega didn’t tell the simulations the same thing it told the “real” person, what Omega said could be used to discriminate between the simulated and the real case, and the anthropic reasoning leading to the conclusion that one is probably a simulation wouldn’t apply.
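For concreteness, here is a minimal sketch of that probability bookkeeping, under the convention used above (n indistinguishable copies in total, exactly one of them unsimulated); the function name and the sample values of n are mine, not anything from the post:

```python
# Minimal sketch, assuming n indistinguishable copies in total,
# exactly one of which is the unsimulated ("real") participant.
from fractions import Fraction

def credences(n):
    """Return (P(real and Omega truthful), P(simulated and Omega lied))
    for an agent who cannot tell itself apart from the other copies."""
    p_real_and_truthful = Fraction(1, n)          # the single unsimulated participant
    p_simulated_and_lied_to = Fraction(n - 1, n)  # every simulation was told a falsehood
    return p_real_and_truthful, p_simulated_and_lied_to

for n in (2, 10, 1000):
    p_real, p_sim = credences(n)
    print(f"n={n}: P(real & truthful) = {p_real}, P(simulated & lied to) = {p_sim}")
```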
In short, taking into consideration that I may be one of the simulations Omega is speaking about is incoherent without also considering that Omega may be lying. There may be clever reformulations that avoid this problem, but I don’t see one at the moment.
I reread the OP and, while it could be stated better, I did not see any obvious lies told by Omega, except maybe a lie by omission.
From the OP:

I create a perfect simulation of you, and run it through a thousand simulated years of horrific torture (which will take my hypercomputer all of a billionth of a second to run), after which I delete the simulation and hand you a box with a billion dollars in it.

My interpretation of your interpretation, which you have said is basically right:

And if Omega tells me that he has created n simulations of me whose experiences up to now are exactly the same as mine, including the dialogue with Omega, I should assign probability 1/n to not being one of those simulations.

So, we have Omega telling the simulations that it is going to give them a box with a billion dollars (if they choose what they choose), when it instead tortures them and then deletes them. That is an explicit lie, isn’t it? Moreover, Omega tells the simulations that they, too, will be simulated; but unless Omega can create an infinite regress of simulations of simulations (which I consider obviously impossible), at least some of the simulations are never simulated, in violation of Omega’s promise to them.