From our perspective, this amounts to (2): all advanced civilizations die off in massive industrial accidents; God alone knows what they thought they were trying to accomplish.
Also, wouldn’t there still be people who choose to stay behind? Unless we’re talking about something that blows up entire solar systems, it would remain possible for members of the advanced civilization to opt out of this very tempting choice. And I feel confident that for at least some civilizations, there will be people who refuse to bite and say “OK, you guys go inhabit a tiny subset of all universes as gods; we will stay behind and occupy all remaining universes as mortals.”
If this process keeps going for a while, you end up with a residual civilization composed overwhelmingly of people who harbor strong memes against taking extremely low-probability, high-payoff risks, even when the probability arithmetic says they should.
For your proposal to work, it has to be an all-or-nothing thing that affects every member of the species, or affects a broad enough area that the people who aren’t interested have no choice but to play along because there’s no escape from the blast radius of the “might make you God, probably kills you” machine. The former is unlikely because it requires technomagic; the latter strikes me as possible only if it triggers events we could detect at long range.
I admit that your analysis is quite convincing, but I will play devil’s advocate just for fun:
1) We see a lot of cataclysmic events in our universe whose sources are, at the very least, uncertain. It is definitely possible that some of them originate from super-advanced civilizations going up in flames (whether through accident or deliberate effort).
2) Maybe the minority that does not approve of going down the narrow branch is even less inclined to witness the spectacular death of the elite and then live on in a resource-exhausted corner of the universe, and therefore decides to play along.
3) Even if a small risk-averse minority of the civilization is left behind, once it grows back to a certain size, a large part of it will again decide to go down the narrow path, so it won’t grow significantly over time.
4) If the minority becomes extremely conservative and risk-averse (due to selection over some iterations of 3), then it necessarily means that it has also lost its ambition to colonize the galaxy; it will just stagnate across a few star systems and try to hide from other civilizations to avoid any possible conflict, so we would have difficulty detecting it.
Good points. However:
(1) Most of the cataclysms we see are either fairly explicable (supernovae) or seem to occur only at remote points in spacetime, early in the evolution of the universe, when the emergence of intelligent life would have been very unlikely. Quasars and gamma ray bursts cannot plausibly be industrial accidents in my opinion, and supernovae need not be industrial accidents.
(2) Possible, but I can still imagine large civilizations of people whose utility function is weighted such that “99.9999% death plus 0.0001% superman” is inferior to “continued mortal existence.”
(3) Again possible, but there will be a selection effect over time. Eventually, the remaining people (who, you will notice, live in a universe where people who try to ascend to godhood always die) will no longer think ascending to godhood is a good idea. Maybe the ancients were right and there really is a small chance that the ascent process works and doesn’t kill you, but you have never seen it work, and you have seen your civilization nearly exterminated by the power-hungry fools who tried it the last ten times.
At what point do you decide that it’s more likely that the ancients did the math wrong and the procedure just flat out does not work?
(4) The minority might have no problems with risks that do not have a track record of killing everybody. However, you have a point: a rational civilization that expects the galaxy to be heavily populated might be well advised to hide.
“(2) Possible, but I can still imagine large civilizations of people whose utility function is weighted such that ‘99.9999% death plus 0.0001% superman’ is inferior to ‘continued mortal existence.’”
You have to keep in mind that your subjective experience will be 100% superman. The whole idea is that MWI is true and has been completely convincingly demonstrated by other means as well. It is as if someone told you: you enter this room, and all you will experience is leaving it with one billion dollars. I think that is a seductive prospect.
Yet another analogy: assume that you have the choice between the following two scenarios:
1) You get replicated a million times and all the copies lead an existence of hopeless poverty
2) You continue your current existence as a single copy, but in luxury
The absolute reference frame may be different, but the relative difference between the two outcomes is very similar to that of the alternative above.
Additional motivation could come from knowing that if you don’t do it and instead wait a very, very long time, the cumulative risk that you experience some other civilization going superman and obliterating you will rise above a certain threshold. For a single civilization the chance of experiencing this would be negligible, but in a universe filled with aspiring civilizations, the chance of experiencing at least one of them going omega could become a significant risk after a while.
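A minimal sketch of how that cumulative risk accumulates; the per-epoch probability and the horizons below are invented numbers, not anything taken from the argument above:

    # Toy version of the cumulative-risk argument (all numbers invented).
    # Suppose that in each epoch there is a small probability p that some other
    # civilization within reach goes omega and obliterates you.
    p = 1e-4  # assumed per-epoch risk
    for T in (100, 1_000, 10_000, 100_000):
        risk = 1 - (1 - p) ** T  # chance of at least one such event within T epochs
        print(f"T = {T:>7,}: cumulative risk = {risk:.3f}")
    # The risk climbs from about 0.01 at T = 100 to essentially 1 at T = 100,000:
    # negligible over a short wait, near-certain over cosmological timescales.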
Agreed, it is a seductive prospect. If “advanced civilization” means a superintelligent AI with perfect rationality, I see no reason why any civilization wouldn’t make the choice. A lot of humans certainly wouldn’t, though.
Your aliens are assigning zero weight to their own death, as opposed to a negative weight. While this may be logical, I can certainly imagine a broadly rational intelligent species that doesn’t do it.
Consider the problems with doing so. Suppose that Omega offers to give a friend of yours a wonderful life if you let him zap you out of existence. A wonderful life for a friend of yours clearly has a positive weight, but I’d expect you to say “no,” because you are assigning a negative weight to death. If you assign a zero weight to an outcome involving your own death, you’d go for it, wouldn’t you?
I think a more reasonable weighting vector would say “cessation of existence has a negative value, even if I have no subjective experience of it.” It might still be worth it if the probability ratio of “superman to dead” is good enough, but I don’t think every rational being would count all the universes without them in it as having zero value.
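Here is a minimal sketch of how much that weighting choice matters; the success probability and all the utilities are invented purely for illustration:

    # Expected utility of the ascent under different weights for the "dead" branches.
    # All numbers are invented for illustration.
    p_success = 1e-6   # assumed chance the ascent works
    u_superman = 1e9   # payoff of godhood
    u_mortal = 1.0     # payoff of declining and staying mortal

    def expected_utility(u_dead):
        return p_success * u_superman + (1 - p_success) * u_dead

    print(expected_utility(u_dead=0.0))    # 1000.0 -> ascend, since it beats u_mortal
    print(expected_utility(u_dead=-1e4))   # about -9000 -> decline
    # With death weighted at zero, the tiny success branch dominates the calculation;
    # give death even a modest negative weight and the same arithmetic says "no",
    # unless the "superman to dead" probability ratio improves a great deal.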
Moreover, many rational beings might choose instead to work on improving the procedure that would make them into supermen, hoping to reduce the probability of an extinction event. After all, if becoming a superman with probability 0.0001% is good, how much better to become one with probability 0.1%, or 10%, or even (oh, unattainable of unattainables) probability 1!
Finally, your additional motivation raises a question in its own right: why haven’t we encountered an Omega Civilization yet? If intelligence is common enough that an explanation for our not being able to find it is required, it is highly unlikely that any Omega Civilizations exist in our galaxy. For being an Omega Civilization to be tempting enough to justify the risks we’re talking about, I’d say that it would have to raise your civilization to the point of being a significant powerhouse on an interstellar or galactic scale. In which case it should be far easier for mundane civilizations to detect evidence of an Omega Civilization than to detect ordinary civilizations that lack the resources to do things like juggle Dyson spheres and warp the fabric of reality to their whims.
The only explanation for this is that the probability of some civilization within range of us (either in range to reach us, or to be detected by us) having gone Omega in the history of the universe is low. But if that’s true, then the odds are also low enough that I’d expect to see more dissenters from advanced civilizations trying to ascend, who then proceed to try to do things the old-fashioned way.
Hmmm, it seems that most of your arguments are framed in plain probability-theoretic terms: what is the expected utility, given certain probabilities of certain outcomes? Throughout the argument you compute expected values.
The whole point of my example was that, assuming a many-worlds view of the universe (i.e., a multiverse), using those decision procedures is questionable at best in some situations.
In the classical probabilistic view, you won’t experience your payoff at all if you don’t win. In an MWI framework, you will experience it for sure. (Of course, the rest of the world sees a high chance of your losing, but why should that bother you?)
I definitely would not gamble my life on a one-in-a-million chance, but if Omega convinced me that MWI is definitely correct and that the game is set up so that I will experience my payoff for sure in some branches of the multiverse, then it would be quite different from a simple gamble.
I think this is quite an interesting case where human intuition and MWI clash, simply because it contradicts our everyday beliefs about physical reality. I am not saying the above would be an easy decision for me, but I don’t think you can just compute expected value to make the choice. The choice is really about subjective values: which is more important to you, your subjective experience or saturating the branches of the multiverse with your copies?
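To make the two decision procedures being contrasted concrete, here is a toy sketch with invented numbers: the ordinary expected value over all branches versus the “subjective” value that conditions on the branches you actually get to experience:

    # Two ways to value the same gamble under MWI (all numbers invented).
    p_win = 1e-6    # branch weight in which you survive and collect the payoff
    u_win = 1e3     # utility of the winning branch
    u_lose = 0.0    # branches in which you no longer exist
    u_stay = 1.0    # utility of declining the gamble

    ev_over_all_branches = p_win * u_win + (1 - p_win) * u_lose  # 0.001 -> decline
    experienced_value = u_win                                    # 1000  -> accept
    # The classical rule compares ev_over_all_branches with u_stay; the
    # "you will experience the payoff for sure" view compares experienced_value
    # with u_stay. Which comparison is legitimate is exactly the point in dispute.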
“Finally, your additional motivation raises a question in its own right: why haven’t we encountered an Omega Civilization yet?”
That one is easy: I purposely assumed that going omega is a “high-risk” process (a misleading word, but maybe the closest), meaning that even if some civilizations go omega, outsiders (i.e., us) will see them simply wiped out in an overwhelming number of Everett branches, i.e., with very high probability from our point of view. Therefore we would have to wait through a huge number of civilizations going omega before we experience one of them attaining Omega status. Still, if we wait long enough (since the probability of experiencing it is nonzero), some of them will inevitably succeed in our Everett subtree, and we will see that civilization as a winner.
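The same arithmetic as the cumulative-risk sketch above, run from the observer’s side; the per-attempt success probability is again an invented number:

    # How many civilizations must attempt the omega process before our branch
    # is likely to contain at least one visible winner? (p is invented.)
    import math

    p = 1e-6  # assumed probability that a single attempt succeeds in our branch

    def chance_we_see_a_winner(n_attempts):
        return 1 - (1 - p) ** n_attempts

    print(chance_we_see_a_winner(1_000))               # ~0.001: almost surely nothing yet
    print(math.ceil(math.log(0.5) / math.log(1 - p)))  # ~693,000 attempts for even a 50% chance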
To make this calculation in an MWI multiverse, you still have to place a zero (or extremely small negative) value on all the branches where you die and take most or all of your species with you. You don’t experience them, so they don’t matter, right? That’s a specialized form of a general question which amounts to “does the universe go away when I’m not looking at it?”
If one can make rational decisions about a universe that doesn’t contain oneself in it (and life insurance policies, high-level decorations for valor, and the like suggest this is possible), then outcomes we aren’t aware of have to have some nonzero significance, for better or for worse.
As for “question in its own right,” I think you misunderstood what I was getting at. If advanced civilizations are probable and all or nearly all of them try to go Omega, and they’ve all (in our experience, on this worldline) failed, it suggests that the probability must be extremely low, or that the power benefits to be had from going Omega are low enough that we cannot detect them over galaxy-scale distances.
In the first case, the odds of dissenters not drinking the “Omegoid” Kool-Aid increase: the number of people who will accept a multiverse that kills them in 9 branches and makes them gods in the 10th is probably somewhat larger than the number who will accept one that kills them in 999,999,999 branches and makes them gods in the 10^9th. So you’d expect dissenter cultures to survive the general self-destruction of the civilization and carry on with their existence by mundane means (or try to find a way to improve the reliability of the Omega process).
In the second case (Omega civilizations are not detectable at galactic-scale distances), I would be wary of claiming that the benefits of going Omega are obvious. In which case, again, you’ll get more dissenters.