Your confusion about Tegmark IV seems to remain, though, so I’m glad you signaled that. This topic is analogous to Tegmark IV, in that in both cases the distinction being made is essentially epiphenomenal: multiverse theories talk about which things “exist” or “don’t exist”, and here Bob is supposed to feel “non-existence”. The property of “existence” is meaningless; that’s the problem in both cases. When you refer to the relevant concepts (worlds, the behavior of Bob’s program), you refer to all their properties, and you can’t stamp “exists” on top of that (unless the concept itself is inconsistent, say).
One can value certain concepts, and make decisions based on properties of those concepts. The concepts themselves are determined by what the decision-making algorithm is interested in.
It seems to me you’re mistaken. Multiverse theories do make predictions about what experiences we should anticipate, they’re just wrong. You haven’t yet given any real answer to the issue of pheasants, or maybe I’m a pathetic failure at parsing your posts.
Incidentally, my problem makes for a nice little test case: what experiences do you think Bob “should” anticipate in his future, assuming now that we can meddle in the simulation at will? Does this question have a single correct answer? If it doesn’t, why do such questions appear to have correct answers in our world, answers which don’t require us to hypothesize random meddling gods, and does that tell us anything about how our world is different from Bob’s?
Multiverse theories do make predictions about what experiences we should anticipate, they’re just wrong.
On the contrary, multiverse theories do make predictions about subjective experience. For example, they predict what sort of subjective experience a sentient computer program should have, if any, after being halted. Some predict oddities like quantum immortality. The problem is that all observations that could shed light on the issue also require leaving the universe, making the evidence non-transferable.
It seems I misread your comment. Sorry.