One thing that caught my eye is the presentation of “Universe is not filled with technical civilizations...” as data against the hypothesis of modern civilizations being probable.
It occurs to me that this could mean any of three things, only one of which indicates that modern civilizations are improbable.
1) Modern civilizations are in fact as rare as they appear to be because they are unlikely to emerge. This is the interpretation used by this article.
2) Modern civilizations collapse quickly back to a premodern state, either by fighting a very destructive war, by high-probability natural disasters, by running out of critical resources, or by a cataclysmic industrial accident such as major climate change or a Gray Goo event.
This would undermine an attempt to judge the odds of modern civilizations emerging based on a small sample size. If (2) is true, the fact that we haven’t seen a modern civilization doesn’t mean it doesn’t exist; it’s more likely to mean that it didn’t last long enough to appear on our metaphorical radar. All we know with high confidence is that there haven’t been any modern civilizations on Earth before us, which places an upper bound on the likely range of probabilities for it to happen; Earth may be a late bloomer, but it’s unlikely to be such a late bloomer that three or four civilizations would have had time to emerge before we got here.
3) The apparent rarity of modern civilizations could just be a sign that we are bad at detecting them. We know that alien civilizations haven’t visited us in the historic past, that they haven’t colonized Earth before we got here, and that they haven’t beamed detectable transmissions at us, but those facts could quite plausibly be explained by other factors. Some hypotheses come to mind for me, but I removed them for the sake of brevity; they are available if anyone’s interested.
Anyway, where I was going with all this:
I can see a lot of alternate interpretations to explain the fact that we haven’t detected evidence of modern civilizations in our galaxy, some of which would make it hard to infer anything about the likelihood of civilizations emerging from the history of our own planet. That doesn’t mean I think that considering the problem isn’t worthwhile, though.
4) There is a very easy and unavoidable way to destroy the universe (or make it inhospitable) using technology, and any technological civilization will inevitably do so at a certain pretty early point in its history. Therefore, only one technological civilization per universe ever exists, and we should not be surprised to find ourselves to be the first.
5) The Dark Lords of the Matrix are only interested in running one civilization in our particular sim.
We can still be surprised that we arrived in our universe so late.
Re 4), is this destruction supposed to violate relativity? Also, if so, why do we find ourselves so late in cosmic history? Similar anthropic considerations interfere with a non-FTL destruction mechanism like vacuum collapse.
6) Faster than light travel is not physically possible, the other civilizations all originated far away, and the other civilizations are all composed of people who don’t like to live in generational spaceships their entire lives.
Your 6 falls under Simon’s category 3: “they exist, but we can’t detect them, and they aren’t beaming an easy to detect advertisement of their existence to places where life might arise”
3.1) Further, they use some crypto-secure or sufficiently low-power RF communication that looks like or is masked by noise. They also don’t leak much distinctive non-communicative RF (no Las Vegas).
3.1.1) They also have no interest (or ability) to create reasonably capable robots who don’t mind the boredom of interstellar travel (either alone, or in an isolated community) as their emissaries.
This is my hypothesis (3c), with an implicit overlay of (3a).
Generation spaceships? No joke...
Another possible resolution of the Fermi paradox, based on the many-worlds interpretation of QM:
Let us assume that advanced civilizations find overwhelming evidence for the many-worlds hypothesis as the true, infallible theory of physics. Additionally, assume that there is a quantum mechanical process with a huge payoff at a very small probability: the equivalent of a cosmic lottery, where the chance of obliteration is close to 1, the chance of winning is close to zero, but the payoff is HUGE. It is like going into a room where you win a billion dollars with p = 1/1,000,000 and die a sudden, painless death with p = 999,999/1,000,000. Still, if the many-worlds hypothesis is true, you will experience the winning for sure.
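To make the disagreement concrete, here is a minimal Python sketch of that lottery. The utility numbers (u_win and the candidate u_death values) are illustrative assumptions, and the “subjective MWI” line simply implements the branch-discounting valuation described above, not a claim about how such a calculation ought to be done.

```python
# Toy numbers for the "cosmic lottery" above: win with p = 1e-6,
# die painlessly otherwise. The only real question is what weight
# the death branches get.
p_win = 1e-6
p_die = 1 - p_win
u_win = 1e9               # utility of the payoff (illustrative)

def classical_eu(u_death):
    """Ordinary expected utility: every branch counts at its probability."""
    return p_win * u_win + p_die * u_death

# With death branches weighted at zero, even the classical calculation
# favors playing (EU = 1000 > 0 here); a sufficiently negative death
# utility flips the answer.
for u_death in (0.0, -1e4, -1e7):
    print(f"u_death = {u_death:>12,.0f}   EU = {classical_eu(u_death):>14,.1f}")

# The "subjective MWI" valuation sketched above discards the death
# branches entirely: the agent expects to experience only the win.
print("subjective (branch-discounting) value:", u_win)
```

The later exchange about zero versus negative weight on the death branches is exactly a disagreement over which of these two calculations is the right one to make.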
Now imagine that at some point in its existence, every very advanced civilization faces the decision to make the leap of faith in the many-worlds interpretation: start the machine that obliterates them in almost every branch of the Everett multiverse, while letting them live on in a few branches with a hugely increased amount of resources (energy/computronium/whatever). Since they know that their only subjective experience will be of getting the payoff at negligible risk, they will choose the path of trickling down into some of the much narrower Everett branches.
However, to any outside civilization this would mean that they simply vanish from its branch of the universe with very high probability. Since every advanced civilization would be faced with the above extremely seductive way of gaining cheap resources, the probability that two of them share the same universe becomes infinitesimally small.
From our perspective, this falls under (2): all advanced civilizations die off in massive industrial accidents; God alone knows what they thought they were trying to accomplish.
Also, wouldn’t there still be people who choose to stay behind? Unless we’re talking about something that blows up entire solar systems, it would remain possible for members of the advanced civilization to opt out of this very tempting choice. And I feel confident that for at least some civilizations, there will be people who refuse to bite and say “OK, you guys go inhabit a tiny subset of all universes as gods; we will stay behind and occupy all remaining universes as mortals.”
If this process keeps going on for a while, you end up with a residual civilization composed overwhelmingly of people who harbor strong memes against taking extremely low-probability, high-payoff risks, even if the probability arithmetic indicates doing so.
For your proposal to work, it has to be an all-or-nothing thing that affects every member of the species, or affects a broad enough area that the people who aren’t interested have no choice but to play along because there’s no escape from the blast radius of the “might make you God, probably kills you” machine. The former is unlikely because it requires technomagic; the latter strikes me as possible only if it triggers events we could detect at long range.
I admit that your analysis is quite convincing, but will play the devil’s advocate just for fun:
1) We see a lot of cataclysmic events in our universe, the sources of which are at least uncertain. It is definitely a possibility that some of them originate from super-advanced civilizations going up in flames. (Maybe due to accidents or deliberate effort.)
2) Maybe the minority that does not approve of trickling down into the narrow branches is even less inclined to witness the spectacular death of the elite and live on in a resource-exhausted section of the universe, and therefore decides to play along.
3) Even if a small risk-averse minority of the civilization is left behind, once it reaches a certain size again, a large part of it will again decide to go down the narrow path, so it won’t grow significantly over time.
4) If the minority becomes so extremely conservative and risk-averse (due to selection after some iterations of 3), then it has necessarily also lost its ambition to colonize the galaxy; it will just stagnate across a few star systems and try to hide from other civilizations to avoid any possible conflict, so we would have difficulty detecting it.
Good points. However:
(1) Most of the cataclysms we see are either fairly explicable (supernovae) or seem to occur only at remote points in spacetime, early in the evolution of the universe, when the emergence of intelligent life would have been very unlikely. Quasars and gamma ray bursts cannot plausibly be industrial accidents in my opinion, and supernovae need not be industrial accidents.
(2) Possible, but I can still imagine large civilizations of people whose utility function is weighted such that “99.9999% death plus 0.0001% superman” is inferior to “continued mortal existence.”
(3) Again possible, but there will be a selection effect over time. Eventually, the remaining people (who, you will notice, live in a universe where people who try to ascend to godhood always die) will no longer think ascending to godhood is a good idea. Maybe the ancients were right and there really is a small chance that the ascent process works and doesn’t kill you, but you have never seen it work, and you have seen your civilization nearly exterminated by the power-hungry fools who tried it the last ten times.
At what point do you decide that it’s more likely that the ancients did the math wrong and the procedure just flat out does not work?
(4) The minority might have no problems with risks that do not have a track record of killing everybody. However, you have a point: a rational civilization that expects the galaxy to be heavily populated might be well advised to hide.
(2) Possible, but I can still imagine large civilizations of people whose utility function is weighted such that “99.9999% death plus 0.0001% superman” is inferior to “continued mortal existence.”
You have to keep in mind that your subjective experience will be 100% superman. The whole idea is that the MWI is true and has been completely convincingly demonstrated by other means as well. It is as if someone told you: you enter this room, and all you will experience is that you leave the room with one billion dollars. I think it is a seductive prospect.
Yet another analogue: Assume that you have the choice between the following two scenarios:
1) You get replicated a million times, and all the copies lead an existence of hopeless poverty.
2) You continue your current existence as a single copy, but in luxury.
The absolute reference frame may be different, but the relative difference between the two outcomes is very similar to that of the alternative above.
Possible additional motivation could come from knowing that if you don’t do it and wait a very, very long time, the cumulative risk that you experience some other civilization going superman and obliterating you will rise above a certain threshold. For a single civilization the chance of experiencing this would be negligible, but in a universe filled with aspiring civilizations, the chance of experiencing at least one of them going omega could become a significant risk after a while.
Agreed, it is a seductive prospect. If an advanced civilization means a superintelligent AI with perfect rationality, I see no reason why any civilization wouldn’t make the choice. Certainly a lot of humans wouldn’t, though.
Your aliens are assigning zero weight to their own death, as opposed to a negative weight. While this may be logical, I can certainly imagine a broadly rational intelligent species that doesn’t do it.
Consider the problems with doing so. Suppose that Omega offers to give a friend of yours a wonderful life if you let him zap you out of existence. A wonderful life for a friend of yours clearly has a positive weight, but I’d expect you to say “no,” because you are assigning a negative weight to death. If you assign a zero weight to an outcome involving your own death, you’d go for it, wouldn’t you?
I think a more reasonable weighting vector would say “cessation of existence has a negative value, even if I have no subjective experience of it.” It might still be worth it if the probability ratio of “superman to dead” is good enough, but I don’t think every rational being would count all the universes without them in it as having zero value.
Moreover, many rational beings might choose to instead work on the procedure that will make them into supermen, hoping to reduce the probability of an extinction event. After all, if becoming a superman with probability 0.0001% is good, how much better to become one with probability 0.1%, or 10%, or even (oh unattainable of unattainables) 1!
Finally, your additional motivation raises a question in its own right: why haven’t we encountered an Omega Civilization yet? If intelligence is common enough that an explanation for our not being able to find it is required, it is highly unlikely that any Omega Civilizations exist in our galaxy. For being an Omega Civilization to be tempting enough to justify the risks we’re talking about, I’d say that it would have to raise your civilization to the point of being a significant powerhouse on an interstellar or galactic scale. In which case it should be far easier for mundane civilizations to detect evidence of an Omega Civilization than to detect ordinary civilizations that lack the resources to do things like juggle Dyson spheres and warp the fabric of reality to their whims.
The only explanation of this is that the probability of some civilization within range of us (either in range to reach us, or to be detected by us) having gone Omega in the history of the universe is low. But if that’s true, then the odds are also low enough that I’d expect to see more dissenters from advanced civilizations trying to ascend, who then proceed to try and do things the old-fashioned way.
Hmmm, it seems that most of your arguments are cast in plain probability-theoretic terms: what is the expected utility, assuming certain probabilities of certain outcomes. Throughout the argument you compute expected values.
The whole point of my example was that, assuming a many-worlds view of the universe (i.e., a multiverse), using the above decision procedures is questionable at best in some situations.
In the classical probabilistic view, you won’t experience your payoff at all if you don’t win. In an MWI framework, you will experience it for sure. (Of course the rest of the world sees a high chance of your losing, but why should that bother you?)
I definitely would not gamble my life on 1-in-1,000,000 chances, but if Omega convinced me that MWI is definitely correct and the game is set up so that I will experience my payoff for sure in some branches of the multiverse, then it would be quite different from a simple gamble.
I think it is quite an interesting case where human intuition and MWI clash, simply because it contradicts our everyday beliefs about our physical reality. I don’t say that the above would be an easy decision for me, but I don’t think you can just compute expected value to make the choice. The choice is really more about subjective values: what is more important to you, your subjective experience or saturating the multiverse’s branches with your copies?
“Finally, your additional motivation raises a question in its own right: why haven’t we encountered an Omega Civilization yet?”
That one is easy: I purposefully made the assumption that going omega is a “high-risk” (a misleading word, but maybe the closest) process, meaning that even if some civilizations go omega, outsiders (i.e., us) will see them simply wiped out in an overwhelming number of Everett branches, i.e., with very high probability from our point of view. Therefore we would have to wait through a huge number of civilizations going omega before we experience one of them attaining Omega status. Still, if we wait long enough (since the probability of experiencing it is nonzero), some of them will inevitably succeed in our Everett subtree, and we will see that civilization as a winner.
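A quick sketch of that waiting argument, with made-up numbers: if each attempt to go omega leaves a winner in our branch with probability p, then after N independent attempts we should expect to have witnessed at least one winner with probability 1 − (1 − p)^N.

```python
# How many civilizations must "go omega" before outside observers like us
# are likely to have seen one succeed in their branch? Both p_success and
# the attempt counts below are illustrative assumptions.
p_success = 1e-6   # chance, per attempt, that the winner ends up in our branch

for n_attempts in (10**3, 10**5, 10**6, 10**7):
    p_seen = 1 - (1 - p_success) ** n_attempts
    print(f"{n_attempts:>12,} attempts -> P(we see a winner) = {p_seen:.5f}")
```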
To make this calculation in a MWI multiverse, you still have to place a zero (or extremely small negative) value on all the branches where you die and take most or all of your species with you. You don’t experience them, so they don’t matter, right? That’s a specialized form of a general question which amounts to “does the universe go away when I’m not looking at it?”
If one can make rational decisions about a universe that doesn’t contain oneself in it (and life insurance policies, high-level decorations for valor, and the like suggest this is possible), then outcomes we aren’t aware of have to have some nonzero significance, for better or for worse.
As for “question in its own right,” I think you misunderstood what I was getting at. If advanced civilizations are probable and all or nearly all of them try to go Omega, and they’ve all (in our experience, on this worldline) failed, it suggests that the probability must be extremely low, or that the power benefits to be had from going Omega are low enough that we cannot detect them over galaxy-scale distances.
In the first case, the odds of dissenters not drinking the “Omegoid” Kool-Aid increase: the number of people who will accept a multiverse that kills them in 9 branches and makes them gods in the 10th is probably somewhat larger than the number who will accept one that kills them in 999,999,999 branches and makes them gods in the 10^9th. So you’d expect dissenter cultures to survive the general self-destruction of the civilization and carry on with their existence by mundane means (or to try to find a way to improve the reliability of the Omega process).
In the second case (Omega civilizations are not detectable at galactic-scale distances), I would be wary of claiming that the benefits of going Omega are obvious. In which case, again, you’ll get more dissenters.
There’s also some assumption here that civilisations either collapse or conquer the galaxy, but that ignores another possibility—that civilisations might quickly reach a plateau technologically and in terms of size.
The reason this could be the case is that civilisations must always solve their problems of growth and sustainability long before they have the technology to move beyond their home planet; once they have done so, there ceases to be any imperative toward off-world expansion, and without ever-increasing economies of scale, technological development tapers off.
“Sometimes I think the surest sign that intelligent life exists elsewhere in the universe is that none of it has tried to contact us.”
But, Calvin, P(intelligent life contacting us | intelligent life exists) >= P(intelligent life contacting us | intelligent life does not exist) = 0, so the fact that no other intelligent life has contacted us can only be evidence against its existence.
(The problem with formally bringing out Bayes’ law is that, by the time you’ve gone through and stated everything “properly”, your toboggan will have already crashed into the brier patch.)
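For what it’s worth, here is roughly what the toboggan-crashing version looks like: a minimal Bayes’-rule sketch in Python. The prior and the value of P(contact | exists) are made-up assumptions; the only structural input from the comment above is that P(contact | no intelligent life) = 0.

```python
# Bayes' rule applied to the Calvin joke, with made-up numbers.
# "exists" = intelligent life exists elsewhere; "contact" = it has contacted us.
p_exists = 0.5                  # prior (assumption)
p_contact_given_exists = 0.1    # assumption; any value > 0 gives the same sign
p_contact_given_not = 0.0       # nothing can't contact us

p_no_contact_given_exists = 1 - p_contact_given_exists
p_no_contact_given_not = 1 - p_contact_given_not

# Posterior after observing "no contact so far":
p_no_contact = (p_no_contact_given_exists * p_exists
                + p_no_contact_given_not * (1 - p_exists))
posterior = p_no_contact_given_exists * p_exists / p_no_contact

print(f"P(exists | no contact) = {posterior:.3f}  (prior was {p_exists})")
# ~0.474 < 0.5: not being contacted is (weak) evidence against existence,
# which is the point being made above.
```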
I think the joke hinges on equivocation of the word “intelligent”. Taboo “intelligent”, use “sapient” and “clever” for the two meanings, and you get: “Sometimes I think the surest sign that clever life exists elsewhere in the universe is that no sapient life has tried to contact us.” Or, put more accurately, “the fact that no sapient life has contacted us is evidence that, if sapient life exists elsewhere in the universe, it’s probably also clever”.
By the law of conservation of evidence, if detecting an alien civilization would make their existence more likely, then not detecting them after a sustained effort makes it less likely, right?
Counterevidence for 2 - there are extremely few sustained reversals of either life or civilization. The Toba bottleneck seems like the most likely near-reversal, and it happened before modern civilization. You would need to postulate an extremely high likelihood of collapse if you suggest that emergence is very frequent and yet civilizations aren’t around. If only 90% of civilizations collapse (which seems a vastly higher proportion than we have any reason to believe), then if civilizations are likely, they should still be plentiful. Hypothesis 2 would only work if emergence is very likely and fast extinction is nearly inevitable. Once a civilization starts spreading widely across star systems, extinction seems extremely unlikely.
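To put rough numbers on that point, a minimal sketch (the total emergence count is an assumption pulled out of thin air):

```python
# If emergence is frequent, even a high collapse rate leaves many survivors.
n_emerged = 10_000    # civilizations that ever arise in the galaxy (assumed)

for p_collapse in (0.90, 0.99, 0.999):
    survivors = n_emerged * (1 - p_collapse)
    print(f"collapse rate {p_collapse:.1%}: ~{survivors:,.0f} civilizations persist")

# Only by pushing the collapse rate implausibly close to 1 can frequent
# emergence be reconciled with an apparently empty galaxy.
```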
Counterevidence for 3 - some models suggest that advanced civilizations would spread across the galaxy extremely quickly by geological timescales. That leaves us with:
Advanced civilizations are numerous but were all created extremely recently, within the last 0.1% of the galaxy’s lifetime or so (extremely unlikely, to the point that we can ignore it).
We are so bad at detection that we cannot even detect a galaxy-wide civilization (seems unlikely; do you postulate that?).
These models are really bad, and advanced civilizations are in fact constrained to spread extremely slowly (more plausible, but the models have no empirical support).
3 is false and there are few or no other advanced civilizations in the galaxy (what I find most likely), either because they never arise in the first place or because they go extinct.
My ranking of the probabilities is 1 >> 3 >> 2. And yes, I’m aware that belief in existential risks is widespread here—I don’t share this belief at all.
Countercounterevidence for 3: what are the assumptions made by those models of interstellar colonization?
Do they assume fusion power? We don’t know if industrial fusion power works economically enough to power starships. Likewise for nanotech-type von Neumann machines and other tools of space colonization.
The adjustable parameters in any model for interstellar colonization are defined by the limits of capability for a technological civilization. And we don’t actually know the limits, because we haven’t gotten close enough to those limits to probe them yet. If the future looks like the more optimistic hard science fiction authors suggest, then the galaxy should be full of intelligence and we should be able to spot the drive flares of Orion-powered ships flitting around, or the construction of Dyson spheres by the more ambitious species. We should be able to see something, at any rate.
But if the future doesn’t look like that, if there’s no way to build cost-effective fusion reactors and the only really worthwhile sustainable power source is solar, if there are hard limits on what nanotech is capable of that limit its industrial applications, and so on… the barrier to entry for a planetary civilization hoping to go galactic may be so high that even with thousands of intelligent species to make the attempt, none of them make it.
This ties back into the hypotheses I left out of my post for the sake of brevity; I’m now considering throwing them in to explain my reasoning a little better. But I’m still not sure I should do it without invitation, because they are on the long side.
It’s sticky sweet candy for the mind. Why not share it?
Here goes:
Alternate explanations for rarity of intelligence:
3a) Interstellar travel is prohibitively difficult. The fact that the galaxy isn’t obviously awash in intelligence is a sign that FTL travel is impossible or extremely unfeasible.
Barring technology indistinguishable from magic, building any kind of STL colonizer would involve a great investment of resources for a questionable return; intelligent beings might just look at the numbers and decide not to bother. At most, the typical modern civilization might send probes out to the nearest stellar neighbors. If the cost of sending a ton of cargo to Alpha Centauri is, say, 0.0001% of your civilization’s annual GDP, you’re not likely to see anyone sending million-ton colony ships to Alpha Centauri. In which case intelligent life might be relatively common in the galaxy without any of it coming here; even the more ambitious cultures that actually did bother to make the trip to the nearest stars would tend to peter out over time rather than going through exponential expansion.
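Plugging that example’s numbers in (the 0.0001%-of-GDP-per-ton figure is the assumption made above; the ship masses are illustrative):

```python
# Rough shipping-cost scaling for the Alpha Centauri example above.
cost_per_ton = 0.0001 / 100     # 0.0001% of annual GDP per ton, as a fraction

for tons in (1, 1_000, 1_000_000):
    cost = tons * cost_per_ton
    print(f"{tons:>9,} t payload -> {cost:.4%} of annual GDP ({cost:.2f} GDP-years)")

# A million-ton colony ship already costs a full year of planetary GDP in
# transport alone, before counting what the ship itself costs to build.
```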
3b) Interstellar colonization is prohibitively difficult. If sending an STL colony expedition to another star is hard, sending one with a large enough logistics base to terraform a planet will be exponentially harder.
There are something on the order of 1000 stars within 50 to 60 light years of us. Assuming more or less uniform stellar densities, if the probability of a habitable planet appearing around any given star is much less than 0.1%, it’s likely that such planets will remain permanently out of reach for a sublight colony ship. In that case, spreading one’s civilization throughout the galaxy depends on being able to terraform planets across interstellar distances before setting up a large population on those worlds. Even if travel across short (~10 ly) interstellar distances is not prohibitively difficult, there might still be little or no incentive to colonize the available worlds beyond one’s own star system. After all, if you’re going to live in a climate-controlled bunker on an uninhabitable rock where you can’t step outside without being freeze-dried or boiled alive, you might as well do it somewhere closer to home.
NOTE: This amounts to “super-difficult life,” but it does not require that there are few intelligent species in the galaxy. If the emergence of life is (for lack of a better term) super-duper-difficult, or if most planets are inhospitable enough to make it impossible, then we could have many thousands of intelligent species in the galaxy without any of them being likely to reach each other.
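Putting numbers on the estimate in (3b), using the ~1000-stars-in-range figure from above and a few assumed values for the per-star chance of a habitable planet:

```python
# Expected habitable planets within sublight colony-ship range.
n_stars_in_range = 1000     # rough count within 50-60 light years (from above)

for p_habitable in (1e-2, 1e-3, 1e-4, 1e-5):
    expected = n_stars_in_range * p_habitable
    print(f"P(habitable) = {p_habitable:.0e}: ~{expected:g} habitable planet(s) in range")

# Much below one in a thousand, the expected count drops under one and a
# sublight colonizer most likely has nowhere habitable to go.
```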
3c) Interstellar colonization might be “psychologically” difficult. For instance, what if the next logical step in the evolution of modern civilization is an AI singularity, possibly coupled with some kind of uploading of consciousness into machines? Either way, our descendants 200 years from now might well be, to our eyes, a civilization of robots. To a society of strong AIs, interstellar colonization is liable to look a little different. Traveling to even the nearest stars, you will be cut off from the rest of your civilization by a transmission gap on the order of 10^20 cycles just because of the lightspeed limit.*
That might sound like an even worse idea to them than spending a long lifetime in cryogenic storage and having a twenty year round trip communication cycle with Earth does to us. In which case they’re likely to stay at home and come up with elaborate social activities or simulations to spend their time, because interstellar colonization is just too unpleasant to bear considering.
*Assuming roughly 1 THz computing, for relatively near stellar neighbors. This estimate is probably too low, but I need some numbers and I am nowhere near an expert on artificial intelligence or the probable limits of computer technology.
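The footnote’s estimate, worked out explicitly; the 1 THz clock and the Alpha Centauri distance are the stated assumptions:

```python
# One-way signal lag to a nearby star, in clock cycles for a 1 THz mind.
SECONDS_PER_YEAR = 3.156e7
clock_hz = 1e12          # 1 THz, the footnote's assumption
distance_ly = 4.2        # roughly the distance to Alpha Centauri

lag_seconds = distance_ly * SECONDS_PER_YEAR    # light travel time, one way
lag_cycles = lag_seconds * clock_hz

print(f"one-way lag: {lag_seconds:.2e} s = {lag_cycles:.2e} cycles")
# ~1.3e20 cycles, i.e. the "order of 10^20" gap quoted above; a faster clock
# or a more distant target only makes it larger.
```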
For a machine-phase civilization, the only one of these that seems plausible is 3c, but I can’t think of any reason why no one in a given civilization would want to leave, and assuming growth of any kind, resource pressure alone will eventually drive expansion. If the need for civilization is so psychologically strong, copies can be shipped and revived only after specialized systems have built enough infrastructure to support them.
It seems far more likely to me, given the emergence of multiple civilizations in a galaxy, that some technical advance inevitably destroys them. Nanomedicine malfunction or singleton seem like the best bets to me just now, which would suggest that the best defenses are spreading out and technical systems’ heterogeneity.
A machine-phase civilization might still find (3a) or (3b) an issue depending on whether nanotech pans out. We think it will, but we don’t really know, and a lot of technologies turn out to be profoundly less capable than the optimists expect them to be in their infancy. Science fiction authors in the ’40s and ’50s were predicting that atomic power sources would be strongly miniaturized (amusingly, more so than computing devices); that never happened and it looks like the minimum size for a reasonably safe nuclear reactor really is a large piece of industrial machinery.
If nanotech does what its greatest enthusiasts expect, then the minimum size of industrial base you need to create a new technological civilization in a completely undeveloped solar system is low (I don’t know, probably in the 10-1000 ton range), in which case the payload for your starship is low enough that you might be able to convince people to help you build and launch it. Extremely capable nanotech also helps on the launch end by making the task of organizing the industrial resources to build the ship easier.
But if nanotech doesn’t operate at that level, if you actually need to carry machine tools and stockpiles of exotic materials unlikely to be found in asteroid belts and so on… things could be expensive enough that at any point in a civilization’s history it can think of something more interesting to do with the resources required to build an interstellar colony ship. Again, if the construction cost of the ship is an order of magnitude greater than the gross planetary product, it won’t get built, especially if very few people actually want to ride it.
Also, could you define “singleton” for me, please?
Sorry for taking so long on this; I forgot to check back using a browser that can see red envelopes (I usually read lesswrong with elinks).
I think if nanotech does what its greatest enthusiasts expect, the minimum size of the industrial base will be in the 1-10 ton range. However, if we’re assuming that level of nanotech, anyone who wants will be able to launch their own expedition, personally, without any particular help other than downloading GNU/Spaceship. If nanotech works as advertised, it turns construction into a programming project.
Also, if we limit ourselves to predictions made in the ’50s with no assumptions of new science, I think we’ll find that the predictions are technically reasonable, and the main reason we don’t have nuclear cars and basement reactors now involves politics. Molecular manufacturing probably cannot be contained this way, since it doesn’t require a limited resource that’s easy to detect from a distance.
Others have defined singleton, so I assume you’re happy with that. :)
Re: Nanotech
That’s exactly my point: if nanotech performs as advertised by its starriest-eyed advocates, then interstellar colonization can be done with small payloads and energy is cheap enough that they can be launched easily. That is a very big “if,” and not one we can shrug off or assume in advance as the underlying principle of all our models.
What if nanotech turns out to have many of the same limits as its closest natural analogue, biological cells? Biotech is great for doing chemistry, but not so great for assembling industrial machinery (like large solar arrays) in a hostile environment.
As for the “nuclear cars and basement reactors” being out of the picture because of politics and not engineering, that’s… really quite impressively not true, I think. Fission reactors create neutrons that slip through most materials like a ghost and can riddle you with radiation unless you stand far away or have excellent shielding. Radioisotope thermal generators require synthetic or refined isotopes that are expensive by nature because they have to be *made*, atom by atom… and they’re still quite radioactive if they’re hot enough to be a useful power source.
The real problem isn’t the atomic power source itself, it’s the shielding you need to keep it from giving you cancer. There’s no easy way to miniaturize that, because neutron capture cross-sections play no favorites and can’t be tinkered with.
This stuff is not a toy, and there are very good reasons of engineering why it never made the leap from industrial equipment to household use, except in the smallest and most trivial scales (such as americium in smoke detectors). It’s not just about politics.
‘singleton’ as I’ve seen it used seems to be one possible Singularity in which a single AI absorbs everyone and everything into itself in a single colossal entity. We’d probably consider it a Bad Ending.
A singleton is a more general concept than intelligence explosion. The specific case of a benevolent AGI singleton aka FAI is not a bad ending. Think of it as Nature 2.0, supervised universe, not as a dictator.
If civilizations achieve a certain sophistication, they necessarily decipher the purpose of the universe and once they understand its true meaning and that they are just a superfluous side-effect, they simply commit suicide. Here is a blog entry of mine elaborating on this hypothesis:
One thing that caught my eye is the presentation of “Universe is not filled with technical civilizations...” as data against the hypothesis of modern civilizations being probable.
It occurs to me that this could mean any of three things, which only one of which indicates that modern civilizations are improbable.
1) Modern civilizations are in fact as rare as they appear to be because they are unlikely to emerge. This is the interpretation used by this article.
2) Modern civilizations collapse quickly back to a premodern state, either by fighting a very destructive war, by high-probability natural disasters, by running out of critical resources, or by a cataclysmic industrial accident such as major climate change or a Gray Goo event.
This would undermine an attempt to judge the odds of modern civilizations emerging based on a small sample size. If (2) is true, the fact that we haven’t seen a modern civilization doesn’t mean it doesn’t exist; it’s more likely to mean that it didn’t last long enough to appear on our metaphorical radar. All we know with high confidence is that there haven’t been any modern civilizations on Earth before us, which places an upper bound on the likely range of probabilities for it to happen; Earth may be a late bloomer, but it’s unlikely to be such a late bloomer that three or four civilizations would have had time to emerge before we got here.
3) The apparent rarity of modern civilizations could just be a sign that we are bad at detecting them. We know that alien civilizations haven’t visited us in the historic past, that they haven’t colonized Earth before we got here, and that they haven’t beamed detectable transmissions at us, but those quite plausibly be explained by other factors. Some hypotheses come to mind for me, but I removed them for the sake of brevity; they are available if anyone’s interested.
Anyway, where I was going with all this: I can see a lot of alternate interpretations to explain the fact that we haven’t detected evidence of modern civilizations in our galaxy, some of which would make it hard to infer anything about the likelihood of civilizations emerging from the history of our own planet. That doesn’t mean I think that considering the problem isn’t worthwhile, though.
4) There is a very easy and unavoidable way to destroy the universe (or make it inhospitable) using technology, and any technological civilization will inevitably do so at a certain pretty early point in its history. Therefore, only one technological civilization per universe ever exists, and we should not be surprised to find ourselves to be the first.
5) The Dark Lords of the Matrix are only interested in running one civilization in our particular sim.
We can still be surprised that we arrived in our universe so late.
Re 4), is this destruction supposed to violate relativity? Also, if so, why do we find ourselves so late in cosmic history? Similar anthropic considerations interfere with a non-FTL destruction mechanism like vacuumn collapse.
6) Faster than light travel is not physically possible, the other civilizations all originated far away, and the other civilizations are all composed of people who don’t like to live in generational spaceships their entire lives.
Your 6 falls under Simon’s category 3: “they exist, but we can’t detect them, and they aren’t beaming an easy to detect advertisement of their existence to places where life might arise”
3.1) Further, they use some crypto-secure or sufficiently low-power RF communication that looks like or is masked by noise. They also don’t leak much distinctive non-communicative RF (no Las Vegas).
3.1.1) They also have no interest (or ability) to create reasonably capable robots who don’t mind the boredom of interstellar travel (either alone, or in an isolated community) as their emissaries
This is my hypothesis (3c), with an implicit overlay of (3a).
Generation spaceships? No joke...
Another possible resolution of the Fermi paradox based on the many world interpretation of QM:
Let us assume that advanced civilizations find overwhelming evidence for the many world hypothesis as the true, infallible theory of physics. Additionally, assume that there is a quantum mechanical process that has a huge payoff at a very small probability: the equivalent of a cosmic lottery, where the chances of obliteration are close to 1, the chance of winning is close to zero, but the payoff is HUGE. It is like going into a room, where you win a billion dollar with p=1:1000000 and die a sudden, painless death at p= 999999:1000000. Still, for the many world hypothesis is true, you will experience the winning for sure.
Now imagine that at some point of its existence every very advanced civilization faces the decision to make the leap of face in the many world interpretation: start the machine that obliterates them in almost every branches of the Everett-multiverse, while letting them live on in a few branches with a huge amount of increased resources (energy/ computronium/ whatever) Since they know that their only subjective experience will be of getting the payoff at a negligible risk, they will choose the path of trickling down in some of the much narrower Everett-branches.
However, it would mean for any outsider civilizations are that they simply vanish from their branch of the Universe at a very high probability. Since every advanced civilization would be faced with the above extremely seducing way of gaining cheap resources, the probability that two of them will share the same universe will get infinitesimally small.
To our perspective, this is from (2): all advanced civilizations die off in massive industrial accidents; God alone knows what they thought they were trying to accomplish.
Also, wouldn’t there still be people who chose to stay behind? Unless we’re talking about something that blows up entire solar systems, it would remain possible for members of the advanced civilization to opt out of this very tempting choice. And I feel confident that for at least some civilizations, there will be people who refuse to bite and say “OK, you guys go inhabit a tiny subset of all universes as gods; we will stay behind and occupy all remaining universes as mortals.”
If this process keeps going on for a while, you end up with a residual civilization composed overwhelmingly of people who harbor strong memes against taking extremely low-probability, high-payoff risks, even if the probability arithmetic indicates doing so.
For your proposal to work, it has to be an all-or-nothing thing that affects every member of the species, or affects a broad enough area that the people who aren’t interested have no choice but to play along because there’s no escape from the blast radius of the “might make you God, probably kills you” machine. The former is unlikely because it requires technomagic; the latter strikes me as possible only if it triggers events we could detect at long range.
I admit that your analysis is quite convincing, but will play the devil’s advocate just for fun:
1) We see a lot of cataclysmic events in our universe, the source of which are at least uncertain. It is definitely a possibility that some of them could originate from super-advanced civilizations going up in flame. (Maybe due to accidents or deliberate effort)
2) Maybe the minority that does not approve trickling down the narrow branch is even less inclined to witness the spectacular death of the elite and live on in a resource-exhausted section of the universe and therefore decides to play along.
3) Even if a small risk-averse minority of the civilization is left behind, when it reaches a certain size again, large part of it will decide again to go down the narrow path so it won’t grow significantly over time.
4) If the minority becomes so extremely conservative and risk-averse (due to selection after some iterations of 3) then it necessarily means that it has also lost its ambitions to colonize the galaxy and will just stagnate along a few star systems and will try to hide from other civilizations to avoid any possible conflicts, so we would have difficulties to detect them.
Good points. However: (1) Most of the cataclysms we see are either fairly explicable (supernovae) or seem to occur only at remote points in spacetime, early in the evolution of the universe, when the emergence of intelligent life would have been very unlikely. Quasars and gamma ray bursts cannot plausibly be industrial accidents in my opinion, and supernovae need not be industrial accidents.
(2)Possible, but I can still imagine large civilizations of people whose utility function is weighted such that “99.9999% death plus 0.0001% superman” is inferior to “continued mortal existence.”
(3)Again possible, but there will be a selection effect over time. Eventually, the remaining people (who, you will notice, live in a universe where people who try to ascend to godhood always die) will no longer think ascending to godhood is a good idea. Maybe the ancients were right and there really is a small chance that the ascent process works and doesn’t kill you, but you have never seen it work, and you have seen your civilization nearly exterminated by the power-hungry fools who tried it the last ten times.
At what point do you decide that it’s more likely that the ancients did the math wrong and the procedure just flat out does not work?
(4)The minority might have no problems with risks that do not have a track record of killing everybody. However, you have a point: a rational civilization that expects the galaxy to be heavily populated might be well advised to hide.
(2)Possible, but I can still imagine large civilizations of people whose utility function is weighted such that “99.9999% death plus 0.0001% superman” is inferior to “continued mortal existence.”
You have to keep in mind that subjective experience will be 100% superman. The whole idea is that the MWI is true and completely convincingly demonstrated by other means as well. It is like if someone would tell you: you enter this room and all you will experience is that you leave the room with one billion dollars. I think it is a seducing prospect.
Yet another analogue: Assume that you have the choice between the following two scenarios:
1) You get replicated million times and all the copies will lead an existence in hopeless poverty
2) You continue your current existence as a single copy but in luxury
The absolute reference frame may be different but the relative difference between the two outcomes is very similar to those of the above alternative.
Possible additional motivation could be given by knowing that if you don’t do that and wait a very very long time, the cumulative risk that you experience some other civilization going superman and obliterating you will raise above a certain threshold. For single civilizations the chance of experiencing it would be negligible but for a universe filled with aspiring civilizations, the chance of experiencing at least one of them going omega could become a significant risk after a while.
Agree it is a seducing prospect. If advanced civilization means superintelligent AI with perfect rationality, I see no reason why any civilization wouldn’t make the choice. Certainly a lot of humans wouldn’t though.
Your aliens are assigning zero weight to their own death, as opposed to a negative weight. While this may be logical, I can certainly imagine a broadly rational intelligent species that doesn’t do it.
Consider the problems with doing so. Suppose that Omega offers to give a friend of yours a wonderful life if you let him zap you out of existence. A wonderful life for a friend of yours clearly has a positive weight, but I’d expect you to say “no,” because you are assigning a negative weight to death. If you assign a zero weight to an outcome involving your own death, you’d go for it, wouldn’t you?
I think a more reasonable weighting vector would say “cessation of existence has a negative value, even if I have no subjective experience of it.” It might still be worth it if the probability ratio of “superman to dead” is good enough, but I don’t think every rational being would count all the universes without them in it as having zero value.
Moreover, many rational beings might choose to instead work on the procedure that will make them into supermen, hoping to reduce the probability of an extinction event. After all, if becoming a superman with probability 0.0001% is good, how much better to become one with probability 0.1%, or 10%, or even (oh unattainable of unattainables) 1!
Finally, your additional motivation raises a question in its own right: why haven’t we encountered an Omega Civilization yet? If intelligence is common enough that an explanation for our not being able to find it is required, it is highly unlikely that any Omega Civilizations exist in our galaxy. For being an Omega Civilization to be tempting enough to justify the risks we’re talking about, I’d say that it would have to raise your civilization to the point of being a significant powerhouse on an interstellar or galactic scale. In which case it should be far easier for mundane civilizations to detect evidence of an Omega Civilization than to detect ordinary civilizations that lack the resources to do things like juggle Dyson spheres and warp the fabric of reality to their whims.
The only explanation of this is that the probability of some civilization within range of us (either in range to reach us, or to be detected by us) having gone Omega in the history of the universe is low. But if that’s true, then the odds are also low enough that I’d expect to see more dissenters from advanced civilizations trying to ascend, who then proceed to try and do things the old-fashioned way.
Hmmm, it seems that most of your arguments are in plain probability-theoretical terms: what is the expected utility assuming certain probabilities of certain outcomes. During the arguments you compute expected values.
The whole point of my example was that assuming a many world view of the universe (i.e. multiverse), using the above decision procedures is questionable at best in some situations.
In classical probability theoristic view, you won’t experience your payoff at all if you don’t win. In a MWT framework, you will experience it for sure. (Of course the rest of the world sees a high chance of your loosing, but why should that bother you?)
I definitely would not gamble my life on 1:1000000 chances, but if Omega would convince me that MWI is definitely correct and the game is set up in a way that I will experience my payoff for sure in some branches of the multiverse, then it would be quite different from a simple gamble.
I think it is a quite an interesting case where human intuition and MWI clashes, simply because it contradicts our everyday beliefs on our physical reality. I don’t say that the above would be an easy decision for me, but I don’t think you can just compute expected value to make the choice. The choice is really more about subjective values: what is more important to you: your subjective experience or saturating the Multiverse branches with your copies.
“Finally, your additional motivation raises a question in its own right: why haven’t we encountered an Omega Civilization yet?”
That one is easy: The assumption I purposefully made that going omega is a “high risk” (a misleading word, but maybe the closest) process meaning that even if some civilizations went omega, the outsiders (i.e. us) will see them simply wiped out in an overwhelming number of Everett-branches, i.e. with very high probability for us. Therefore we have to wait a huge number of civilizations going omega before we experience them having attained Omega status. Still, if we wait too long, (since the probability of experiencing it is nonzero) some of them will inevitably manage in our Everett-subtree and we will see that civ as a winner.
To make this calculation in a MWI multiverse, you still have to place a zero (or extremely small negative) value on all the branches where you die and take most or all of your species with you. You don’t experience them, so they don’t matter, right? That’s a specialized form of a general question which amounts to “does the universe go away when I’m not looking at it?”
If one can make rational decisions about a universe that doesn’t contain oneself in it (and life insurance policies, high-level decorations for valor, and the like suggest this is possible), then outcomes we aren’t aware of have to have some nonzero significance, for better or for worse.
As for “question in its own right,” I think you misunderstood what I was getting at. If advanced civilizations are probable and all or nearly all of them try to go Omega, and they’ve all (in our experience, on this worldline) failed, it suggests that the probability must be extremely low, or that the power benefits to be had from going Omega are low enough that we cannot detect them over galaxy-scale distances.
In the first case, the odds of dissenters not drinking the “Omegoid” Kool-Aid increase: the number of people who will accept a multiverse that kills them in 9 branches and makes them gods in the 10th is probably somewhat larger than the number who will accept one that kills them in 999999999 branches and makes them gods in the 10^9th. So you’d expect dissenter cultures to survive the general self-destruction of the civilization and carry on with their existence by mundane means (or trying to find a way to improve the reliability of the Omega process)
In the second case (Omega civilizations are not detectable at galactic-scale distances), I would be wary of claiming that the benefits of going Omega are obvious. In which case, again, you’ll get more dissenters.
There’s also some assumption here that civilisations either collpase or conquer the galaxy, but that ignores another possibility—that civilisations might quickly reach a plateau technologically and in terms of size.
The reasons this could be the case is that civilisations must always solve their problems of growth and sustainability long before they have the technology to move beyond their home planet, and once they have done so, there ceases to be any imperative toward off-world expansion, and without ever increasing economies of scale, technological developments taper off.
“Sometimes I think the surest sign that intelligent life exists elsewhere in the universe is that none of it has tried to contact us.”
But, Calvin, P(intelligent life contacting us | intelligent life exists) >= P(intelligent life contacting us | intelligent life does not exist) = 0, so the fact that no other intelligent life has contacted us can only be evidence against its existence.
(The problem with formally bringing out Bayes’ law is that, by the time you’ve gone through and stated everything “properly”, your toboggan will have already crashed into the brier patch.)
I think the joke hinges on equivocation of the word “intelligent”. Taboo “intelligent”, use “sapient” and “clever” for the two meanings, and you get: “Sometimes I think the surest sign that clever life exists elsewhere in the universe is that no sapient life has tried to contact us.” Or, put more accurately, “the fact that no sapient life has contacted us is evidence that, if sapient life exists elsewhere in the universe, it’s probably also clever”.
By law of conservation of evidence if detecting alien civilization makes them more likely, not detecting them after sustained effort makes them less likely, right?
Counterevidence for 2 - there are extremely few sustained reversals of either life or civilization. Toba bottleneck seems like the most likely near-reversal, and it happened before modern civilization. You would need to postulate extremely high likelihood of collapse if you suggest that emergence is very frequent, and still civilizations aren’t around. If only 90% of civilizations collapse (what seems vastly higher proportion than we have any reason to believe), then if civilizations are likely, they should still be plentiful. Hypothesis 2 would only work if emergence is very likely, and then fast extinction is nearly inevitable. After civilization starts spreading widely across star systems extinction seems extremely unlikely.
Counterevidence for 3 - some models suggest that advanced civilizations would have spread extremely quickly across galaxy by geological timescales. That leaves us with:
Advanced civilizations are numerous but were all created extremely recently, last 0.1% of galaxy’s lifetime or so (extremely unlikely to the point that we can ignore it)
We suck at detection so much that we cannot even detect galaxy-wide civilization (seems unlikely, do you postulate that?)
These models are really bad, and advanced civilizations tend to be contained to spread extremely slowly (more plausible, these models have no empirical support)
3 is false and there are few or no other advanced civilization in the galaxy (what I find most likely), either by not arising in the first place or extinction.
My rating of probabilities is 1 >> 3 >> 2. And yes, I’m aware existential risks are widely believed here—I don’t share this belief at all.
Countercounterevidence for 3: what are the assumptions made by those models of interstellar colonization?
Do they assume fusion power? We don’t know if industrial fusion power works economically enough to power starships. Likewise for nanotech-type von Neumann machines and other tools of space colonization.
The adjustable parameters in any model for interstellar colonization are defined by the limits of capability for a technological civilization. And we don’t actually know the limits, because we haven’t gotten close enough to those limits to probe them yet. If the future looks like the more optimistic hard science fiction authors suggest, then the galaxy should be full of intelligence and we should be able to spot the drive flares of Orion-powered ships flitting around, or the construction of Dyson spheres by the more ambitious species. We should be able to see something, at any rate.
But if the future doesn’t look like that, if there’s no way to build cost-effective fusion reactors and the only really worthwhile sustainable power source is solar, if there are hard limits on what nanotech is capable of that limit its industrial applications, and so on… the barrier to entry for a planetary civilization hoping to go galactic may be so high that even with thousands of intelligent species to make the attempt, none of them make it.
This ties back into the hypotheses I left out of my post for the sake of brevity; I’m now considering throwing them in to explain my reasoning a little better. But I’m still not sure I should do it without invitation, because they are on the long side.
It’s sticky sweet candy for the mind. Why not share it?
Here goes:
Alternate explanations for rarity of intelligence:
3a) Interstellar travel is prohibitively difficult. The fact that the galaxy isn’t obviously awash in intelligence is a sign that FTL travel is impossible or extremely unfeasible.
Barring technology indistinguishable from magic, building any kind of STL colonizer would involve a great investment of resources for a questionable return; intelligent beings might just look at the numbers and decide not to bother. At most, the typical modern civilization might send probes out to the nearest stellar neighbors. If the cost of sending a ton of cargo to Alpha Centauri is say, 0.0001% of your civilization’s annual GDP, you’re not likely to see anyone sending million-ton colony ships to Alpha Centauri. In which case intelligent life might be relatively common in the galaxy without any of it coming here; even the more ambitious cultures that actually did bother to make the trip to the nearest stars would tend to peter out over time rather than going through exponential expansion.
3b) Interstellar colonization is prohibitively difficult. If sending an STL colony expedition to another star is hard, sending one with a large enough logistics base to terraform a planet will be exponentially harder.
There are something on the order of 1000 stars within 50 to 60 light years of us. Assuming more or less uniform stellar densities, if the probability of a habitable planet appearing around any given star is much less than 0.1%, it’s likely that such planets will remain permanently out of reach for a sublight colony ship. In that case, spreading one’s civilization throughout the galaxy depends on being able to terraform planets across interstellar distances before setting up a large population on those worlds. Even if travel across short (~10 ly) interstellar distances is not prohibitively difficult, there might still be little or no incentive to colonize the available worlds beyond one’s own star system. After all, if you’re going to live in a climate-controlled bunker on an uninhabitable rock where you can’t step outside without being freeze-dried or boiled alive, you might as well do it somewhere closer to home.
NOTE: This amounts to “super-difficult life,” but it does not require that there are few intelligent species in the galaxy. If the emergence of life is (for lack of a better term) super-duper-rare, or if most planets are too inhospitable for it at all, then many thousands of intelligent species could still exist in the galaxy while being spread so thin that none of them is likely to reach another.
3c) Interstellar colonization might be “psychologically” difficult. For instance, what if the next logical step in the evolution of modern civilization is an AI singularity, possibly coupled with some kind of uploading of consciousness into machines? Either way, our descendants 200 years from now might well be, to our eyes, a civilization of robots. To a society of strong AIs, interstellar colonization is liable to look a little different. Traveling to even the nearest stars, you will be cut off from the rest of your civilization by a round-trip transmission gap on the order of 10^20 clock cycles, just because of the lightspeed limit.*
That might sound like an even worse idea to them than spending a long lifetime in cryogenic storage with a twenty-year round-trip communication cycle to Earth does to us. In which case they’re likely to stay at home and come up with elaborate social activities or simulations in which to spend their time, because interstellar colonization is just too unpleasant to bear considering.
*Assuming roughly 1 THz computing, for relatively near stellar neighbors. This estimate is probably too low, but I need some numbers and I am nowhere near an expert on artificial intelligence or the probable limits of computer technology.
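For what it’s worth, a quick sanity check on that figure, assuming the footnote’s 1 THz clock and a neighbor roughly 10 light years away:

```python
# Subjective latency for a round-trip message, under the footnote's assumptions.
SECONDS_PER_YEAR = 3.15e7
clock_hz = 1e12                # assumed 1 THz clock rate
round_trip_years = 2 * 10      # ~10 light years each way at lightspeed

gap_cycles = clock_hz * round_trip_years * SECONDS_PER_YEAR
print(f"{gap_cycles:.1e}")     # ~6.3e20 cycles, consistent with "on the order of 10^20"
```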
For a machine-phase civilization, the only one of these that seems plausible is 3c, but I can’t think of any reason why no one in a given civilization would want to leave, and assuming growth of any kind, resource pressure alone will eventually drive expansion. If the psychological need to remain embedded in a civilization is really that strong, copies of the whole society can be shipped in storage and revived only after specialized systems have built enough infrastructure to support them.
It seems far more likely to me, given the emergence of multiple civilizations in a galaxy, that some technical advance inevitably destroys them. Nanomedicine malfunction or a singleton seem like the best bets to me just now, which would suggest that the best defenses are spreading out and heterogeneity in technical systems.
A machine-phase civilization might still find (3a) or (3b) an issue, depending on whether nanotech pans out. We think it will, but we don’t really know, and a lot of technologies turn out to be profoundly less capable than the optimists expected in their infancy. Science fiction authors in the ’40s and ’50s were predicting that atomic power sources would be dramatically miniaturized (amusingly, more so than computing devices); that never happened, and it looks like the minimum size for a reasonably safe nuclear reactor really is a large piece of industrial machinery.
If nanotech does what its greatest enthusiasts expect, then the minimum size of the industrial base you need to bootstrap a new technological civilization in a completely undeveloped solar system is low (I don’t know, probably in the 10-1000 ton range), in which case the payload for your starship is small enough that you might be able to convince people to help you build and launch it. Extremely capable nanotech also helps on the launch end by making it easier to organize the industrial resources to build the ship.
But if nanotech doesn’t operate at that level, if you actually need to carry machine tools and stockpiles of exotic materials unlikely to be found in asteroid belts and so on… things could be expensive enough that at any point in a civilization’s history it can think of something more interesting to do with the resources required to build an interstellar colony ship. Again, if the construction cost of the ship is an order of magnitude greater than the gross planetary product, it won’t get built, especially if very few people actually want to ride it.
Also, could you define “singleton” for me, please?
Sorry for taking so long on this; I forgot to check back using a browser that can see red envelopes (I usually read lesswrong with elinks).
I think if nanotech does what its greatest enthusiasts expect, the minimum size of the industrial base will be in the 1-10 ton range. However, if we’re assuming that level of nanotech, anyone who wants to will be able to launch their own expedition, personally, without any particular help other than downloading GNU/Spaceship. If nanotech works as advertised, it turns construction into a programming project.
Also, if we limit ourselves to predictions made in the ’50s with no assumptions of new science, I think we’ll find that the predictions were technically reasonable, and that the main reason we don’t have nuclear cars and basement reactors now involves politics. Molecular manufacturing probably cannot be contained this way, since it doesn’t require a limited resource that’s easy to detect from a distance.
Others have defined singleton, so I assume you’re happy with that. :)
Re: Nanotech That’s exactly my point: if nanotech performs as advertised by its starriest-eyed advocates, then interstellar colonization can be done with small payloads and energy is cheap enough that they can be launched easily. That is a very big “if,” and not one we can shrug off or assume in advance as the underlying principle of all our models.
What if nanotech turns out to have many of the same limits as its closest natural analogue, biological cells? Biotech is great for doing chemistry, but not so great for assembling industrial machinery (like large solar arrays) in a hostile environment.
As for the “nuclear cars and basement reactors” being out of the picture because of politics and not engineering, that’s… really quite impressively not true, I think. Fission reactors create neutrons that slip through most materials like ghosts and can riddle you with radiation unless you stand far away or have excellent shielding. Radioisotope thermal generators require synthetic or refined isotopes that are expensive by nature, because they have to be *made*, atom by atom… and they’re still quite radioactive if they’re hot enough to be a useful power source.
The real problem isn’t the atomic power source itself, it’s the shielding you need to keep it from giving you cancer. There’s no easy way to miniaturize that, because neutron capture cross-sections play no favorites and can’t be tinkered with.
This stuff is not a toy, and there are very good reasons of engineering why it never made the leap from industrial equipment to household use, except in the smallest and most trivial scales (such as americium in smoke detectors). It’s not just about politics.
‘Singleton,’ as I’ve seen it used, seems to refer to one possible Singularity outcome in which a single AI absorbs everyone and everything into itself as one colossal entity. We’d probably consider it a Bad Ending.
See Nick Bostrom (2005), “What is a Singleton?”
A singleton is a more general concept than an intelligence explosion. The specific case of a benevolent AGI singleton, a.k.a. FAI, is not a bad ending. Think of it as Nature 2.0, a supervised universe, not as a dictator.
I stand corrected! Maybe this should be a wiki article—it’s not that common, but it’s awfully hard to google.
Done.
Here is another variant:
If civilizations achieve a certain sophistication, they necessarily decipher the purpose of the universe; once they understand its true meaning, and that they themselves are just a superfluous side-effect, they simply commit suicide. Here is a blog entry of mine elaborating on this hypothesis:
http://arachnism.blogspot.com/2009/05/spiritual-explanation-to-fermi-paradox.html