Everything else is way further down the totem pole.
People talk about the grey goo scenario, but I actually think that is quite silly because there is already grey goo all over the planet in the form of life. There are absolutely enormous amounts of bacteria and viruses and fungi and everything else all around us, and given the enormous evolutionary advantage that being a grey goo would confer, we would expect the entire planet to have already been covered in the stuff—probably repeatedly. The fact that we see so much diversity—the fact that nothing CAN do this, despite the enormous evolutionary incentive TO do it—suggests that grey goo scenarios are either impossible or incredibly unlikely. And that’s ignoring the thermodynamic issues, which would almost certainly prevent such a scenario as well: reshaping arbitrary material into more self-replicator would surely take more energy than can be extracted from that material in the first place.
Physics experiments gone wrong have similar problems—we’ve seen supernovas. The energy released by a supernova is vastly beyond what any planetary civilization is likely capable of producing, and seeing as supernovas don’t destroy everything, it is vastly unlikely that anything WE do will manage it. There are enormously energetic events in the universe, and the universe itself is reasonably stable—it seems unlikely that our feeble, merely planetary energy levels are going to do any better in the “destroy everything” department. And even before that, there was the Big Bang, and the universe came to exist out of that whole mess. We have the Sun, and meteorite impact events, both of which are very powerful indeed. And yet, we don’t see exotic, earth-shattering physics coming into play there in unexpected ways. Extremely high energy densities are not likely to propagate—they’re likely to dissipate. And we see this in the universe, and in the laws of thermodynamics.
It is very easy to IMAGINE a superweapon that annihilates everything. But actually building one? Having one that obeys realistic physics? That’s another matter entirely. Indeed, we have very strong evidence against it: if such weapons could be built, then surely some of the intelligent life which has arisen elsewhere in the universe would have built them, and we would see galaxies being annihilated by high-end weaponry. We don’t see this happening. Thus we can assume with a pretty high level of confidence that such weapons do not exist or cannot be created without an implausible amount of work.
The difficult physics of interstellar travel is not to be denied, either—the best we can do with present physics is nuclear pulse propulsion, which might manage perhaps 10% of c and has enormous logistical issues. Anything FTL requires exotic physics which we have no idea how to create, and which may well describe situations that are not physically attainable—that is to say, the numbers may work, but there may be no way to get there, much as the math doesn’t strictly forbid going faster than c, but you can’t ever even REACH c, so the fact that there is a “safe space” according to the math on the other side is meaningless. Without FTL, interstellar travel is far too slow for such disasters to really propagate themselves across the galaxy—any sort of plague would die out on the planet it was created on, and even WITH FTL, it is still rather unlikely that you could easily spread something like that. Only if cheap FTL travel existed would spreading the plague be all that viable… but with cheap FTL travel, everyone else can flee it that much more easily.
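(For concreteness, the relevant relation here is just textbook special relativity, nothing specific to this thread:

E = \gamma m c^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}

The Lorentz factor \gamma, and with it the energy required, grows without bound as v approaches c, which is why the mathematically consistent v > c regime is unreachable for anything massive starting below c.)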
My conclusion from all of this is that these sorts of estimates are less “estimates” and more “wild guesses which we pretend have some meaning, around which we throw a lot of fancy math to convince ourselves and others that we have some idea what we’re talking about”. And that estimates like one in three million, or one in ten, are wild overestimates—and indeed, aren’t based on any logic more sound than that of the guy on The Daily Show who said that it would either happen, or it wouldn’t: a 50% chance.
We have extremely strong evidence against galactic and universal annihilation, and there are extremely good reasons to believe that even planetary-level annihilation scenarios are unlikely due to the sheer amount of energy involved. You’re looking at biocides and large rocks being diverted from their orbits to hit planets, neither of which is really a trivial thing to do.
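It is basically a case of http://tvtropes.org/pmwiki/pmwiki.php/Main/ScifiWritersHaveNoSenseOfScale, except applied in a much more pessimistic manner.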
The only really GOOD argument we have for lifetime-limited civilizations is the Fermi Paradox (https://en.wikipedia.org/wiki/Fermi_paradox)—that is to say, where are all the bloody aliens? Unfortunately, the Fermi Paradox is a somewhat weak argument, primarily because we have absolutely no idea whatsoever which side of the Great Filter we are on. That being said, if practical FTL travel exists, I would expect it to pretty much ensure that any civilization which invented it would simply never die, because of how easy it would be to spread out, making destroying them all vastly more difficult. The galaxy would probably end up colonized and recolonized regardless of how much people fought against it.
Without FTL travel, galactic colonization is possible, but it may be impractical from an economic standpoint; there is little benefit to the home planet in having additional planets colonized. Information is the only thing you could expect to really trade over interstellar distances, and even that is questionable given that locals will likely try to develop technology locally and beat you to market, so unless habitable systems are very close together, duplication of effort seems extremely likely. Entertainment would thus be the largest benefit—games, novels, movies and suchlike. This MIGHT mean that colonization is unlikely, which would be another explanation… but even there, that assumes that they wouldn’t want to explore for the sake of doing so.
Of course, it is also possible we’re already on the other side of the Great Filter, and the reason we don’t see any other intelligent civilizations colonizing our galaxy is that there aren’t any, or that the ones which existed destroyed themselves earlier in their history, or were incapable of progressing to the level we reached due to lack of intelligence, lack of resources, unending warfare which prevented progress, or something else.
This is why pushing for a multiplanetary civilization is, I think, a good thing; if we hit the point where we had 4-5 extrasolar colonies, I think it would be pretty solid evidence that we are beyond the Great Filter. Given the dearth of evidence for interstellar disasters created by intelligent civilizations, I think it is likely that our main window for destroying ourselves lasts only until the point where we expand.
But I digress.
It isn’t impossible that we will destroy ourselves (after all, the Fermi Paradox does offer some weak evidence for it), but I will say that I find any sort of claimed numbers for the likelihood of doing so incredibly suspect, as they are very likely to be made up. And given that we have no evidence of civilizations being capable of generating galaxy-wide disasters, it seems likely that whatever disasters exist are planetary scale at best. And our lack of any plausible scenarios even for that weakens that argument further. The only real evidence we have against our civilization existing indefinitely is the Fermi Paradox, and it has its own flaws. We may destroy ourselves. But until we find other civilizations, you are fooling yourself if you think you aren’t just making up numbers. Anything which destroys us outside of an impact event is likely something we cannot predict.
> “People talk about the grey goo scenario, but I actually think that is quite silly because there is already grey goo all over the planet in the form of life” … “nothing CAN do this, because nothing HAS done it.”
The grey goo scenario isn’t really very silly. We seem to have had a green goo scenario around 1.5 to 2 billion years ago that killed off many or most critters around due to the release of deadly, deadly oxygen; if the bacterial ecosystem were completely stable against goo scenarios, this wouldn’t have happened. We have had mini goo scenarios when, for example, microbiota pretty well adapted to one species made the jump to another and oops, started reproducing rapidly and killing off their new host species, e.g. Yersinia pestis. Just because we haven’t seen a more omnivorous goo sweep over the ecosphere recently…
…other than Homo sapiens, which is actually a pretty good example of a grey goo—think of the species as a crude mesoscale universal assembler, which is spreading pretty fast and killing off other species at a good clip and chewing up resources quite rapidly…
… doesn’t mean it couldn’t happen at the microscale also. Ask the anaerobes, if you can find them; they are still hiding pretty well after the chlorophyll incident.
Since the downside is pretty far down, I don’t think complacency is called for. A reasonable caution before deploying something that could perhaps eat everyone and everything in sight seems prudent.
Remember that the planet spent almost 4 billion years more or less covered in various kinds of goo before the Cambrian Explosion. We know /very little/ of the true history of life in all that time; there could have been many, many, many apocalyptic-type scenarios where a new goo was deployed that spread over the planet and ate almost everything, then either died wallowing in its own crapulence or formed the base layer for a new sort of evolution.
Multicellular life could have started to evolve /thousands of times/ only to be wiped out by goo. If multicellulars only rarely got as far as bones or shells, and were more vulnerable to being wiped out by a goo-plosion than single celled critters that could rebuild their population from a few surviving pockets or spores, how would we even know? Maybe it took billions of years for the Great War Of Goo to end in a Great Compromise that allowed mesoscopic life to begin to evolve, maybe there were great distributed networks of bacterial and viral biochemical computing engines that developed intelligence far beyond our own and eventually developed altruism and peace, deciding to let multicellular life develop.
Or we eukaryotes are the stupid runaway “wet” technology grey goo of prior prokaryote/viral intelligent networks, and we /destroyed/ their networks and intelligence with our runaway reproduction. Maybe the reason we don’t see disasters like forests and cities dissolving in swarms of Andromeda-Strain like universal gobblers is that safeguards against that were either engineered in, or outlawed, long ago. Or, more conventionally, evolved.
What we /do/ think we know about the history of life is that the Earth evolved single-celled life or inherited it via panspermia etc. within about half a billion years of the Earth’s coalescence; then some combination of goo more or less ruled the roost on the Earth’s surface (as far as biology goes) for over three billion years, especially if you count colonies like stromatolites as gooey. In the middle of this long period was at least one thing that looked like a goo apocalypse, one that remade the Earth profoundly enough that the traces are very obvious (e.g. huge beds of iron ore). But there could have been many more mass extinctions we know nothing of.
Then, less than a billion years ago, something changed profoundly and multicellulars started to flourish. This era is less than a sixth of the span of life on Earth. So… five sixths goo-dominated world, one sixth non-goo-dominated world is the short history here. This does not fill me with confidence that our world is very stable against a new kind of goo based on non-wet, non-biochemical assemblers.
I do think we are pretty likely not to deploy grey goo, though. Not because humans are not idiots—I am an idiot, and it’s the kind of mistake I would make, and I’m demonstrably above average by many measures of intelligence. It’s just that I think Eliezer and others will deploy a pre-nanotech Friendly AI before we get to the grey goo tipping point, and that it will be smart enough, altruistic enough, and capable enough to prevent humanity from bletching the planet as badly as the green microbes did back in the day :)
You are starting from the premise that gray goo scenarios are likely, and trying to rationalize your belief.
Yes, we can be clever and think of humans as green goo—the ultimate in green goo, really. That isn’t what we’re talking about and you know it—yes, intelligent life can spread out everywhere, that isn’t what we’re worried about. We’re worried about unintelligent things wiping out intelligent things.
The great oxygenation event is not actually an example of a green goo type scenario, though it is an interesting thing to consider—I’m not sure there even is a generalized term for that kind of scenario, as it was essentially slow atmospheric poisoning. It would be more of a generalized biocide type scenario: the cyanobacteria which caused the great oxygenation event produced something which was toxic to other things, but the toxicity was purely incidental, had nothing to do with their own activity, and probably didn’t even benefit most of them directly (that is to say, the toxicity of the oxygen they produced probably didn’t help them personally). And what actually took over afterwards were things rather different from what came before, many of which were not descended from said cyanobacteria.
It was a major atmospheric change, and is (theoretically) a danger, though I’m not sure how much of an actual danger it is in the real world—we saw the atmosphere shift to an oxygen-dominated one, but I’m not sure how you’d do it again, as I’m not sure there’s something else which can be freed en masse which is toxic—better oxidizers than oxygen are hard to come by, and by their very nature they are rather difficult to liberate, from an energy-balance standpoint. It seems likely that our atmosphere is oxygen-based and not, say, chlorine- or fluorine-based for a reason arising from the physics of liberating said chemicals from chemical compounds.
As for repeated green goo scenarios prior to 600 Mya—I think that’s pretty unlikely, honestly. Looking at microbial diversity and microbial genomes, we see that the domains of life are ridiculously ancient, and that diversity goes back an enormously long distance in time. It seems very unlikely that repeated green goo type scenarios would spare the amount of diversity we actually see in the real world. Eukaryotic life arose 1.6-2.1 Bya, and as far as multicellular life goes, we’ve evidence of cyanobacteria which showed signs of multicellularity 3 Bya.
That’s a long, long time, and it seems unlikely that repeated green goo scenarios are what kept life simple. It seems more likely that what kept life simple was the fact that complexity is hard—indeed, I suspect the big advancement was actually major advancements in the modularity of life. The more modular life becomes, the easier it is to evolve quickly and adapt to new circumstances, but getting modularity from non-modularity is something which is pretty tough to sort out. Once things did sort it out, though, we saw a massive explosion in diversity. Evolving to be better at evolving is a good strategy for continuing to exist, and I suspect that complex multicellular life only came to exist when stuff got to the point where this could happen.
If we saw repeated green goo scenarios, we’d expect the various branches of life to be pretty shallow—even if some diversity survived, we’d expect each diverse group to show a major bottleneck dating back to whenever the last green goo occurred. But that’s not what we actually see. Fungi and animals diverged about 1.5 Bya, for instance, and other eukaryotic diversity arose even prior to that. Animals have been diverging for 1.2 billion years.
It seems unlikely, then, that there have been any green goo scenarios in a very, very long time, if indeed they ever occurred at all. Indeed, it seems likely that life evolved defenses to prevent such scenarios, and did so successfully.
Pestilence is not even close to green goo. Yes, introducing a new disease into a new species can be very nasty, but it almost never actually is, as most of the time it just doesn’t work at all. Even within our own species, smallpox and other Old World diseases devastated the Native Americans, but Native American diseases were not nearly so devastating to the Old Worlders.
Most things which try to jump the species barrier have a great deal of difficulty in doing so, and even when they succeed, their virulence tends to drop over time, because being ridiculously fatal is actually bad for their own continued propagation. And humans have become increasingly good at stopping this sort of thing. I did note engineered plagues as the most likely technological threat, but comparing them to gray goo scenarios is very silly—pathogens are enormously easier to control. The trouble with stuff like gray goo is that it just keeps spreading, but a pathogen requires a host—there are all sorts of barriers in place against pathogens, and everything alive has evolved to deal with them, even novel ones, because the things more likely to survive exposure to novel pathogens are the things more likely to pass on their genes in the long term.
With regards to “intelligent viral networks”—this is just silly. Life on earth is NOT the result of intelligence. You can tell this from our genomes. There are no signs of engineering ANYWHERE in us; no signs of intelligent design.
The gray goo scenario is predicated on the sort of thinking common in bad scifi.
Basically, in scifi the nanotech self-replicators which eat everything in their path are created in one step, as opposed to a realistic depiction of technological progress, where the first nanotech replicators have to sit in a batch of special nutrients and be microwaved, or otherwise provided energy, while being kept perfectly sterile (to keep bacteria from eating your nanotech). Then they’d get gradually improved in a great many steps and find many uses ranging from cancer cures to dishwashers, with corresponding development in goo-control methods. You don’t want your dishwasher goo eating your bread.
The levels of metabolic efficiency and sheer universality needed for gray goo to eat everything in its path (stuff which, note, hasn’t already been eaten by natural life) require a multitude of breakthroughs on top of incredibly advanced nanotechnology and nano-manufacturing capacity within artificial environments.
How does such an advanced civilization fight the gray goo? I can’t know what the best method would be, but a goo equivalent of a bacteriophage is going to be a lot, lot less complicated than the goo itself (the goo has to be able to metabolize a wide variety of foods efficiently; the anti-goo only has to eat goo).
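Please add something like this to the RW nanotech article!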
> Indeed, we have very strong evidence against it: if such weapons could be built, then surely some of the intelligent life which has arisen elsewhere in the universe would have built them, and we would see galaxies being annihilated by high-end weaponry.
That’s a bad argument. We don’t know for sure that intelligent life has arisen. The fact that we don’t see events like that can simply mean that we are the first.
That’s a pretty weak argument due to the mediocrity principle and the sheer scale of the universe; while we certainly don’t know the values for all parts of the Drake Equation, we have a pretty good idea, at this point, that Earth-like planets are probably pretty common, and given that abiogenesis occurred very rapidly on Earth, that is weak evidence that abiogenesis isn’t hard in an absolute sense.
Most likely, the Great Filter lies somewhere in the latter half of the equation—complex, multicellular life, intelligent life, civilization, or the rapid destruction thereof. But even assuming that intelligent life only occurs in one galaxy out of every thousand, which is incredibly unlikely, that would still give us many opportunities to observe galactic destruction.
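(To put a rough number on “many opportunities”, using figures that are my own back-of-the-envelope assumptions rather than anything from this thread: common estimates put the observable universe at somewhere around 10^11 to 10^12 galaxies.)

```python
# Back-of-the-envelope scale check. The galaxy count is an assumed,
# commonly cited rough figure, not a number from this discussion.
galaxies_observable = 2e11        # conservative end of current estimates
rate_intelligent = 1.0 / 1000.0   # the "one galaxy in a thousand" figure above

print(f"{galaxies_observable * rate_intelligent:.0e}")  # 2e+08 galaxies with intelligent life
```

Even under that deliberately stingy assumption, hundreds of millions of galaxies would host intelligent life, and none of them visibly being destroyed is informative.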
It is theoretically possible that we’re the only life in the Universe, but that is incredibly unlikely; most Universes in which life exists will have life exist in more than one place.
> given that abiogenesis occurred very rapidly on Earth, that is weak evidence that abiogenesis isn’t hard in an absolute sense.
We don’t even know that it occurred on earth at all. It might have occurred elsewhere in our galaxy and traveled to earth via asteroids.
> most Universes in which life exists will have life exist in more than one place.
Why? I don’t see any reason why that should be the case. Take, for example, the posts that internet forum users write: most of the time, a user who writes posts at all writes only one post.
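That would make it more likely that there’s life on other planets, not less likely.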
Most planets and stars in the universe are not in our galaxy. If our galaxy has a bit of unicellular life because some very rare event happened, and it is the only galaxy with life, that is consistent with a universe where we are the only intelligent species.
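It looks like you accidentally submitted your comment before finishing it (or there’s a misformatted link or something).

I corrected it.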
After reading through all of the comments, I think I may have failed to address your central point here.
Your central point seems to be “a rational agent should take a risk that might result in universal destruction in exchange for increased utility”.
The problem here is I’m not sure that this is even a meaningful argument to begin with. Obviously universal destruction is extremely bad, but the problem is that utility probably includes all life NOT being extinguished. Or, in other words, this isn’t necessarily a meaningful calculation if we assume that the alternative makes it more likely that universal annihilation will occur.
Say the Nazis gain an excessive amount of power. What happens then? Well, there’s the risk that they make some sort of plague to cleanse humanity, screw it up, and wipe everyone out. That scenario seems MORE likely in a Nazi-run world than one which isn’t. And—let’s face it—chances are the Nazis will try to develop nuclear weapons too, so at best you only bought a few years. And if the wrong people develop them first, you’re in a lot of trouble. So the fact of the matter is that the risk is going to be taken regardless, which further diminishes the loss of utility you could expect from universal annihilation—sooner or later, someone is going to do it, and if it isn’t you, then it will be someone else who gains whatever benefits there are from it.
The higher utility situation likely decreases the future odds of universal annihilation, meaning that, in other words, it is entirely rational to take that risk simply because the odds of destroying the world NOW are less than the odds of the world being destroyed further on down the line by someone else if you don’t make this decision, especially if you can be reasonably certain someone else is going to try it out anyway. And given the odds are incredibly low, it is a lot less meaningful of a choice to begin with.
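(A minimal sketch of that calculation, in my own notation rather than the original poster’s: let p_now be the chance that taking the risk destroys everything today, and p_later the chance the world gets destroyed down the line if you refuse. Then

\mathrm{EU}(\mathrm{act}) = (1 - p_{\mathrm{now}})\,U_{\mathrm{good}} + p_{\mathrm{now}}\,U_{\mathrm{doom}}

\mathrm{EU}(\mathrm{wait}) = (1 - p_{\mathrm{later}})\,U_{\mathrm{status}} + p_{\mathrm{later}}\,U_{\mathrm{doom}}

and with U_doom ≤ 0 ≤ U_status < U_good, acting comes out ahead whenever p_now ≤ p_later, which is exactly the condition argued for above.)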
Incidentally, regarding some other things in here:
> They thought that just before World War I. But that’s not my final rejection. Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts. Those that avoid conflict will be out-competed by those that do not.
There’s actually a pretty good counter-argument to this, namely the fact that capital is vastly easier to destroy than it is to create, and that, thus, an area which avoids conflict has an enormous advantage over one that doesn’t because it maintains more of its capital. As capital becomes increasingly important, conflict—at least violent, capital-destroying conflict—becomes massively less beneficial to its perpetrator, doubly so when the perpetrator also likely benefits from the capital contained in other nations due to trade.
And that’s ignoring the fact that we’ve already sort of engineered a global scenario where “The West” (the US, Canada, Japan, South Korea, Taiwan, Australia, New Zealand, and Western Europe, creeping now as far east as Poland) never attack each other, and slowly make everyone else in the world more like them. It is group selection of a sort, and it seems to be working pretty well. These countries defend their capital, and each other’s capital, benefit from each other’s capital, and engage solely in non-violent conflict with each other. If you threaten them, they crush you and make you more like them; even if you don’t, they work to corrupt you into being more like them. Indeed, even places like China are slowly being corrupted into being more like the West.
The more that sort of thing happens, the less likely violent conflict becomes because it is simply less beneficial, and indeed, there is even some evidence to suggest we are being selected for docility—in “the West” we’ve seen crime rates and homicide rates decline for 20+ years now.
As a final, random aside:
My favorite thing about the Trinity test was the scientist who was taking side bets on the annihilation of the entire state of New Mexico, right in front of the governor of said state, who I’m sure was absolutely horrified.
> the fact that capital is vastly easier to destroy than it is to create
Capital is also easier to capture than it is to create. Your argument looks like saying that it’s better to avoid wars than to lose them. Well, yeah. But what about winning wars?
> we’ve already sort of engineered a global scenario where “The West” … never attack each other
In which meaning are you using the word “never”? :-D
The problem is that asymmetric warfare, which is the best way to win a war, is the worst way to acquire capital. Cruise missiles and drones are excellent for winning without any risk at all, but they’re not good for actually keeping the capital you are trying to take intact.
Spying, subversion, and purchasing are far cheaper, safer, and more effective means of capturing capital than violence.
As far as “never” goes—the last time any two “Western” countries were at war was World War II, which was more or less when the “West” came to be in the first place. It isn’t the longest of time spans, but over time armed conflict in Europe has greatly diminished and been pushed further and further east.
> The problem is that asymmetric warfare, which is the best way to win a war, is the worst way to acquire capital.
The best way to win a war is to have an overwhelming advantage. That sort of situation is much better described by the word “lopsided”. Asymmetric warfare is something different.
Example: Iraqi invasion of Kuwait.
> Spying, subversion, and purchasing are far cheaper, safer, and more effective means of capturing capital than violence.
Spying can capture technology, but technology is not the same thing as capital. Neither subversion nor purchasing are “means of capturing capital” at all. Subversion destroys capital and purchases are exchanges of assets.
> As far as “never” goes—the last time any two “Western” countries were at war was World War II, which was more or less when the “West” came to be in the first place.
That’s an unusual idea of the West. It looks to me like it was custom-made to fit your thesis.
Can you provide a definition? One sufficiently precise to be able to allocate countries like Poland, Israel, Chile, British Virgin Islands, Estonia, etc. to either “West” or “not-West”.
Depends on the capital. It doesn’t work too well for infrastructure and human capital, and the West has plenty of those anyway. What the West is insecure about is energy, and it seems that a combination of diplomacy, threats, and proxy warfare is a more efficient way to keep it flowing than all-out capture.
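Depends on the human capital. Look at the history of the US space program :-/

At the moment. I’m wary of evolutionary arguments based on a few decades worth of data.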
The example of von Braun and co. crossed my mind. But that was something of a side effect. Fighting a war specifically to capture a smallish number of smart people is fraught with risks.
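Opportunistic seizure of capital is to be expected in a war fought for any purpose.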
Incidentally, you can blockquote paragraphs by putting > in front of them, and you can find other help by clicking the “Show Help” button to the bottom right of the text box. (I have no clue why it’s all the way over there; it makes it way less visible.)
> There’s actually a pretty good counter-argument to this, namely the fact that capital is vastly easier to destroy than it is to create, and that, thus, an area which avoids conflict has an enormous advantage over one that doesn’t because it maintains more of its capital.
But, the more conflict avoidant the agents in an area, the more there is to gain from being an agent that seeks conflict.
> The more conflict avoidant the agents in an area, the more there is to gain from being an agent that seeks conflict.
This is only true if the conflict avoidance is innate and is not instead a form of reciprocal altruism.
Reciprocal altruism is an ESS (evolutionarily stable strategy) where pure altruism is not, because you cannot take advantage of it in this way; if you become belligerent, then everyone else turns on you and you lose. Thus, it is never to your advantage to become belligerent.
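As a toy illustration of that stability, here is a minimal iterated prisoner’s dilemma sketch; the payoff values and round count are standard textbook choices of mine, not anything from this thread.

```python
# Minimal iterated prisoner's dilemma: reciprocators (tit-for-tat) thrive
# with each other, while a belligerent defector gets one exploitation
# payoff and then endless mutual punishment.
# Payoffs are the standard textbook values (T=5, R=3, P=1, S=0).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=100):
    """Total scores for two strategies; a strategy maps the opponent's
    previous move (None on the first round) to 'C' or 'D'."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a, move_b = strat_a(last_b), strat_b(last_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda opp_last: "C" if opp_last in (None, "C") else "D"
always_defect = lambda opp_last: "D"

print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation pays
print(play(always_defect, tit_for_tat))  # (104, 99): belligerence loses long-run
```

The belligerent’s 104 points against a reciprocator are far below the 300 reciprocators earn from each other, which is the sense in which turning belligerent never pays.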
Agreed. The word ‘avoid’ and the group selection-y argument made me think it was a good idea to raise that objection and make sure we were discussing reciprocal pacifists, not pure pacifists.