Suppose Omega appears to you and says that you’re living in a deterministic simulation. (Apparent quantum randomness is coming from a pseudorandom number generator.) He gives you two options:
A. He’ll create an FAI inside the simulation to serve you, essentially turning the simulation into your personal heaven.
B. He’ll do A, and also make a billion identical copies of the unmodified simulation (without the FAI), and scatter them around the outside universe. (He tells you that in the unmodified simulation, no FAI will be invented by humanity, so you’ll just live an ordinary human life and then die.)
Is it obvious that choosing A is “crazy”? (Call this thought experiment 1.)
Suppose someone chooses A, and then Omega says “Whoops, I forgot to tell you that the billion copies of the unmodified simulation have already been created, just a few minutes ago, and I’m actually appearing to you in all of them. If you choose A, I’ll just instantly shut off the extra copies. Do you still want A?” (Call this thought experiment 2.)
Thought experiment 1 seems essentially the same as thought experiment 2, which seems similar to quantum suicide. (In other words, I agree with Carl that the situation is not actually as clear as you make it out to be.)
Is it obvious that running in many instances is better than being active on just one “thread”?
We are not talking about empty, sad, but still-existing worlds here, just about creating lots of less happy copies of yourself. (In fact, the two versions of the experiment differ only in a bit of added redundancy in the first few minutes of the computation, which I don’t think counts as “existence”.)
Alternative experiment: you have the possibility to make a copy of yourself, who will have living conditions of the same quality as yours. Furthermore, you stay conscious during the process, so the copy won’t be “you”. Do you agree to the process?
And what if the copy had 40 fewer IQ points and much worse living conditions than you?
I think the problem here is that our utility functions (conditional on such a thing actually existing) don’t seem to be consistent when it comes to copying living entities: not creating a copy at all, and creating one and then killing it, are sometimes identical operations, yet they seem very different to our intuitions. Asking the questions about yourself, or in a QM world, only adds complexity.
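To make that accounting problem concrete, here is a minimal sketch in Python, assuming (purely for illustration) a utility function that sums welfare over distinct experience-histories rather than over running instances. Under that assumption, “never creating a copy” and “creating a bit-identical redundant copy and then deleting it” receive exactly the same score, even though they feel very different:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class History:
    events: tuple    # the experiences that actually get computed
    welfare: float   # how good it is to live through them

def total_welfare(histories):
    # Count each distinct experience-history once: bit-identical redundant
    # copies collapse into a single entry, so deleting one changes nothing.
    distinct = {h.events: h.welfare for h in histories}
    return sum(distinct.values())

life = History(events=("wake", "work", "sleep"), welfare=1.0)
redundant_copy = History(events=("wake", "work", "sleep"), welfare=1.0)

print(total_welfare([life]))                  # 1.0: the copy was never created
print(total_welfare([life, redundant_copy]))  # 1.0: created, then deleted, same total
```

A utility function that instead counted running instances would score the second case higher, which is one way of locating where the intuitions diverge.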
I will assume we are only considering the well-being of the possible people, not the outward consequences of their existence, because that simplifies things and it seems to be implicit here.
Alternative experiment: you have the possibility to make a copy of yourself, who will have living conditions of the same quality as yours. Furthermore, you stay conscious during the process, so the copy won’t be “you”. Do you agree to the process?
Yes.
And what if the copy had 40 fewer IQ points and much worse living conditions than you?
As long as his life would be better than not living, yes. It seems strange to want a being not to exist if ey will enjoy eir life, ceteris paribus.
I think the problem here is that our utility functions (conditional on such a thing actually existing) don’t seem to be consistent when it comes to copying living entities: not creating a copy at all, and creating one and then killing it, are sometimes identical operations, yet they seem very different to our intuitions.
I have tentatively bitten the bullet and decided to consider them equivalent. Death is only bad because of the life that otherwise could have been lived.
Doesn’t this lead to requiring support for increasing the human population as much as possible, up to the point where resources per person make life just barely more pleasant than not living, but no more so?
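As a rough numerical sketch of that worry, assuming (only for illustration) a fixed pool of resources and a logarithmic per-person welfare function with a “exactly neutral” subsistence level: maximizing the total puts the optimum at a large population whose members are each much worse off than members of a smaller population would be, though still above neutral.

```python
import math

R = 1000.0         # total resources to divide (illustrative assumption)
subsistence = 1.0  # resource level at which a life is exactly neutral (illustrative assumption)

def per_person_welfare(share):
    # Positive iff the share exceeds subsistence; logarithmic purely for illustration.
    return math.log(share / subsistence)

def total_welfare(n):
    return n * per_person_welfare(R / n)

best_n = max(range(1, 1000), key=total_welfare)
print(best_n, R / best_n, per_person_welfare(R / best_n))
# About 368 people with ~2.7 units each and welfare ~1.0, versus welfare ~4.6
# apiece for a population of 10: much less pleasant lives, but still above neutral.
```

Swapping in a different concave welfare function moves the optimum around; the sketch only aims to show what maximizing the total does with fixed resources under these assumptions.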
Suppose Omega appears to you and says that you’re living in a deterministic simulation. (Apparent quantum randomness is coming from a pseudorandom number generator.) He gives you two options:
A. He’ll create an FAI inside the simulation to serve you, essentially turning the simulation into your personal heaven.
B. He’ll do A, and also make a billion identical copies of the unmodified simulation (without the FAI), and scatter them around the outside universe. (He tells you that in the unmodified simulation, no FAI will be invented by humanity, so you’ll just live an ordinary human life and then die.)
Is it obvious that choosing A is “crazy”? (Call this thought experiment 1.)
Suppose someone chooses A, and then Omega says “Whoops, I forgot to tell you that the billion copies of the unmodified simulation have already been created, just a few minutes ago, and I’m actually appearing to you in all of them. If you choose A, I’ll just instantly shut off the extra copies. Do you still want A?” (Call this thought experiment 2.)
Thought experiment 1 seems essentially the same as thought experiment 2, which seems similar to quantum suicide. (In other words, I agree with Carl that the situation is not actually as clear as you make it out to be.)
It seems crazy to me, though I don’t think that it is too unlikely that someone will be able to establish it with an argument I haven’t heard.