I think the point, if there is a point, is that you will almost certainly never causally interact with that planet-sized ball of daisies, so it is a waste of resources to spend too much time thinking about it.
Another way of saying it is this. Case A is that the universe is finite and cows spontaneously turn into chickens with probability epsilon. Case B is that the universe is infinite and a fraction epsilon of all cows actually turn into chickens. In both Case A and Case B your experiences are exactly the same: your expectation of observing a cow turning into a chicken is the same.
This ties into the long-running debate over waterfall ethics, which I think is important from an FAI perspective. When you’re thinking about the moral value of the notional torture of humans simulated by the encoded movements of sea snails on a rock, at some point you have to ask yourself not “Does this ‘torture’ have moral value?” but “Am I sufficiently causally entangled with this ‘torture’ that it has moral relevance to me?”
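As a sketch, the Case A / Case B comparison above can be checked numerically, assuming you are a typical observer watching a uniformly sampled cow (the value of epsilon and the sample sizes here are illustrative choices, not from the comment):

```python
import random

# Case A: finite universe, each cow independently turns into a chicken
# with probability EPSILON.
# Case B: infinite universe, exactly a fraction EPSILON of all cows turn
# into chickens; under a typicality (mediocrity) assumption, the cow you
# watch is a uniform draw from the population. We model the infinite
# population with a large finite stand-in.

random.seed(0)
EPSILON = 0.03
TRIALS = 200_000

# Case A: independent transformation events.
case_a = sum(random.random() < EPSILON for _ in range(TRIALS)) / TRIALS

# Case B: uniform draws from a population in which exactly a fraction
# EPSILON transform.
population = [True] * int(EPSILON * TRIALS) + [False] * int((1 - EPSILON) * TRIALS)
case_b = sum(random.choice(population) for _ in range(TRIALS)) / TRIALS

# Both observation frequencies converge to EPSILON, so the two cases
# predict the same experiences for a typical observer.
print(case_a, case_b)
```

The equivalence only holds because of the typicality assumption baked into the Case B sampling step, which is exactly the assumption the reply below takes issue with.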
Not quite true… This only follows in Case B if you assume a principle of mediocrity, i.e. that you are a “typical” observer. But that assumption leads to known problems in an infinite universe: for example, it implies a strong form of the Doomsday argument, in which only a tiny fraction of civilisations survive and become space-colonizing. (Otherwise, you’d expect to be part of a long-lived, space-colonizing civilisation with a huge population, rather than still on your civilisation’s planet of origin with a population of only a few billion.)
If you DON’T assume a principle of mediocrity, then Case B makes extremely weak predictions about what you should observe: you might, for all you know, be part of the epsilon fraction who observe weird things at some point in their lives.
What is the problem with this argument, besides an unpleasant conclusion? We haven’t actually seen any space-colonizing civilizations, and we would expect to see at least some unless we were among the first civilizations in our light cone, or most civilizations weren’t spacefaring (possibly because they don’t survive long enough).
Well, the main problem is the sheer severity of the Doomsday effect.
Suppose space-colonizing civilizations have an average population a billion times that of “doomed” civilizations; then the mediocrity argument implies that fewer than 1 in a billion civilizations become space-colonizing. If the population ratio is a trillion, then fewer than 1 in a trillion become space-colonizing.
But there are something like 10^22 stars in the observable universe, and a space-colonizing civilization could reach a very large portion of them. Further, it would tend to do so if there were no real competition from other colonizing civilizations (any competition would instead arise at the edge of the expansion wave, pushing travel speeds up toward the speed of light). So the most likely population increase factor is something like a billion trillion (10^21) or more, implying a chance of civilization survival of 1 in a billion trillion or less. That does seem unreasonably pessimistic.
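The arithmetic behind this can be sketched explicitly. If a fraction f of civilisations survive and each surviving civilisation has R times the population of a “doomed” one, the fraction of all observers who live in doomed civilisations is (1 − f) / ((1 − f) + f·R). Mediocrity plus the observation that we look “doomed” forces f down to roughly 1/R. The threshold of 0.5 below is an illustrative choice, not from the comment:

```python
# Doomsday arithmetic sketch: population ratios (1e9, 1e12, 1e21) are the
# billion, trillion, and billion-trillion figures from the comment.

def max_survival_fraction(R, p_doomed=0.5):
    """Largest survival fraction f such that at least p_doomed of all
    observers live in doomed civilisations, given population ratio R.

    Solving (1 - f) / ((1 - f) + f*R) >= p_doomed for f gives
    f <= (1 - p_doomed) / ((1 - p_doomed) + p_doomed * R).
    """
    return (1 - p_doomed) / ((1 - p_doomed) + p_doomed * R)

for R in (1e9, 1e12, 1e21):
    print(f"population ratio {R:.0e}: survival fraction f <= {max_survival_fraction(R):.1e}")
```

With p_doomed = 0.5 this reduces to f ≤ 1 / (1 + R), reproducing the “fewer than 1 in a billion” and “1 in a billion trillion” bounds above.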
Fair enough, and on reflection I agree that those kind of survival odds are unreasonably pessimistic given the information we currently have.
“I don’t care about whatever cannot causally interact with me”: that’s the theme of the answers so far. Yet we invest in deciding among unfalsifiable propositions on many issues. For example, even though there may never be an experiment to tell apart MWI from some non-MWI variants, and you will never be causally influenced by the difference, plenty of passion and words are spent on that question. Knowledge in itself can be a terminal value, and it is for many people, even if that knowledge has no utility insofar as it does not influence your actions.
If you knew there were an actual (insert your favorite SciFi franchise)-themed Hubble volume out there, and all that separated you from it were a lot of space, would that not be worth contemplating?
Incidentally, do you believe that?