Same answer I give for all other cases of software life: our ability to run Bob makes him resilient against information-theoretic death. So as long as we store enough to start him from where he left off, he never feels death, and we have met our moral obligations to him.
(First LW post from my first smartphone btw.)
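The "store enough to start him from where he left off" claim can be sketched as checkpoint-and-resume of a deterministic program. Everything below (the update rule, the step counts) is an illustrative assumption of mine, not something from the discussion:

```python
# Toy model: a deterministic program resumed from a saved snapshot
# continues exactly as an uninterrupted run would, which is the sense
# in which storing Bob's state avoids information-theoretic death.
import copy

def step(state):
    # Hypothetical deterministic update rule standing in for Bob's world.
    return [(3 * x + 7) % 101 for x in state]

def run(state, n_steps):
    for _ in range(n_steps):
        state = step(state)
    return state

initial = list(range(10))

# One uninterrupted run of 20 steps.
uninterrupted = run(copy.deepcopy(initial), 20)

# Halt after 7 steps, store a snapshot, then restart "where he left off".
snapshot = run(copy.deepcopy(initial), 7)
resumed = run(copy.deepcopy(snapshot), 13)

# From inside, the two histories are identical: the pause never happened.
assert resumed == uninterrupted
print(resumed == uninterrupted)  # True
```

Determinism with no outside input is doing all the work here; a program that received input from its environment could, in principle, notice the gap.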
Bah, he can’t feel that we don’t run him. Whether we should run him is a question of optimizing the moral value of our world, not of determining his subjective perception. What Bob feels is a property completely determined by the initial conditions of the simulation, and doesn’t (generally) depend on whether he gets implemented in any given world.
You believe in Tegmark IV then? How do you reconcile it with my recent argument against it? Your use of “preference” looks like a get out of jail free card: it can “explain” any sequence of observations by claiming that you only “care” about a specific subset of worlds.
Don’t see how Tegmark IV is relevant here (or indeed relevant anywhere: it doesn’t say anything!). My comment was against expecting Bob to have epiphenomenal feelings: if it’s not something already in his program (which takes no input), then he can’t possibly experience it.
It seems I misread your comment. Sorry.
Your confusion with Tegmark IV seems to remain though, so I’m glad you signaled that. This topic is analogous to Tegmark IV, in that in both cases the distinction made is essentially epiphenomenal: multiverses talk about which things “exist” or “don’t exist”, and here Bob is supposed to feel “non-existence”. The property of “existence” is meaningless, that’s the problem in both cases. When you refer to the relevant concepts (worlds, behavior of Bob’s program), you refer to all their properties, and you can’t stamp “exists” on top of that (unless the concept itself is inconsistent, say).
One can value certain concepts, and make decisions based on properties of those concepts. The concepts themselves are determined by what the decision-making algorithm is interested in.
It seems to me you’re mistaken. Multiverse theories do make predictions about what experiences we should anticipate, they’re just wrong. You haven’t yet given any real answer to the issue of pheasants, or maybe I’m a pathetic failure at parsing your posts.
Incidentally, my problem makes for a nice little test case: what experiences do you think Bob “should” anticipate in his future, assuming now we can meddle in the simulation at will? Does this question have a single correct answer? If it doesn’t, why do such questions appear to have correct answers in our world, answers which don’t require us to hypothesize random meddling gods, and does it tell us anything about how our world is different from Bob’s?
Multiverse theories do make predictions about what experiences we should anticipate, they’re just wrong.
On the contrary, multiverse theories do make predictions about subjective experience. For example, they predict what sort of subjective experience a sentient computer program should have, if any, after being halted. Some predict oddities like quantum immortality. The problem is that all observations that could shed light on the issue also require leaving the universe, making the evidence non-transferable.
Okay next question. Our understanding of the cellular automaton has advanced to the point where we can change one spot of Bob’s world, at one specific moment in time, without being too afraid of harming Bob. It will have ripple effects and change the swamp around him slightly, though. So now we have 10^30 possible slightly-different potential futures for Bob. He will probably be happy in the overwhelming majority of them. How many should we run to fulfill our moral utility function of making sentients happy?
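The "change one spot, with ripple effects" setup can be sketched in a toy automaton. Conway's Game of Life is my stand-in here; the thread's automaton, grid size, and step count are all unspecified, so treat every constant below as an assumption:

```python
# Toy model: the actual automaton in the thought experiment is
# unspecified, so Conway's Game of Life on a torus stands in for it.
import random

SIZE = 32  # grid side length (arbitrary)

def step(grid):
    # One synchronous update of the whole grid (toroidal neighborhood).
    new = [[0] * SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            n = sum(grid[(i + di) % SIZE][(j + dj) % SIZE]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
            new[i][j] = 1 if n == 3 or (grid[i][j] == 1 and n == 2) else 0
    return new

def diverged_cells(a, b):
    # Number of cells where the two runs disagree.
    return sum(x != y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

random.seed(0)  # fixed seed so the sketch is reproducible
world = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]

# "Change one spot of Bob's world, at one specific moment in time":
perturbed = [row[:] for row in world]
perturbed[SIZE // 2][SIZE // 2] ^= 1

d0 = diverged_cells(world, perturbed)  # exactly 1: only the flipped cell
for _ in range(30):
    world, perturbed = step(world), step(perturbed)
d30 = diverged_cells(world, perturbed)  # the "ripple": often many cells,
                                        # though a perturbation can also die out
print(d0, d30)
```

Each choice of which cell to flip seeds a different divergent run, which is where the enormous space of slightly-different potential futures comes from.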
Okay, point taken. The answer depends on how (one believes) the social utility function responds to new instantiations of sentients that are very similar to existing ones. But in any case, you would be obligated to preserve the ability to re-instantiate any already-created being.
The answer depends on how (one believes) the social utility function responds to new instantiations of sentients that are very similar to existing ones.
How does yours?
I don’t think that creation of new sentients, in and of itself, has an impact on the (my) SUF. It only has an impact to the extent that their creators value them and others disvalue such new beings.
He never feels death if we just stop the simulation either.