Grabby Aliens could be Good, could be Bad
Robin Hanson’s Grabby Aliens is a succinct model of how and when technological life will spread. It argues that we’ve simply arrived too early in the universe’s lifespan for other civilizations to have grown to the point of being visible to us yet. They are out there, though, and eventually enough of us will give rise to civilizations large enough that we’ll start to run into each other.
The paper gave me the impression that this was kind of going to be a bad thing, for us, because it means there will be these rapacious colonizers penning us in on every side and forbidding us from rapaciously colonizing as much of the accessible universe as we otherwise would have liked to. I will argue that having neighbors might actually be really good, and then I will argue that it may also be extremely bad, in a way that I don’t think the article touched on.
This could be Very Good
It may be easier to travel at light speed if someone has already built a radio receiver at the other side of the accessible universe: you can travel as a signal, instead of having to physically boost a signal receiver (and programmable nanomanufacturer) all that way and somehow decelerate it once it arrives so that it lands intact on a material source. Moving as signals might be much easier than moving physical objects.
Unless there is some other way of traveling through a dead universe at light speed that doesn’t require moving physical objects?
Is it possible to shoot a laser just right so that, when it hits a certain common kind of mineral, it causes the matter to melt and oscillate into a form that can receive a more subtle sort of signal, which in turn causes it to melt and oscillate into a form that can eventually produce a receiver for a high-bandwidth signal capable of manufacturing an entire person?
And what are the most complex chemicals that would survive a light-speed collision with a suitable substrate? Could we supply the reaction with our own reagents? Could we hail down a searing heavenfall of primordial soup?
There was a great-soul, a Jesuit priest and (by my reckoning) a Deist, Teilhard de Chardin, who wrote of the Omega Point; essentially, he was describing the technological singularity. “He imagined that once the Omega Point had been reached, life might finally move off the planet in an explosive beam of light, heading out to colonize the universe.” I learned of this after musing about this searing heavenfall. I felt the bonds of recurring invention linking our thoughtforms.
The more I think about those things, the more plausible it seems that this would be within reach of a superintelligence.
arch1 from shtetloptimized says:
I seem to recall that in his book Life 3.0, Max Tegmark speculated that radio transmissions may be the fastest way for an aggressive civilization to expand, with an expansion speed not far from c. The idea is that expanding civilization E sends messages which fool a (newcomer) receiving civilization R into building something, which turns out to seed a new center of expansion for E (and I don’t think it ends well for R, though perhaps that is my embellishment).
It could be argued that we (for a particular sense of ‘we’) actually receive more territory as a result of an abundance of life: a universe teeming with life will have more regions allocated to humanlike species than one where life is extremely rare and regularly runs up against its affectability boundaries, which would leave most of space empty!
To put it another way, as is covered in the paper The Edges of Our Universe, if we were to have the universe all to ourselves, this would not mean that we’d get to colonize the entire thing!
The maximum volume we could ever reach before accelerating cosmological expansion makes further travel impossible contains only about 20 billion galaxies! If we assume that life is so rare that civilizations’ affectability volumes barely ever overlap, that leaves most of space devoid of intelligent life, which is probably not to human preference! A universe where life is abundant would actually allocate more space to humans and their counterparts: although each one would receive a much smaller volume, there would be disproportionately more of them per unit of space, because they’re everywhere and more densely packed.
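To make the arithmetic concrete, here is a minimal toy sketch of that comparison in Python, not anything taken from Hanson’s or Ord’s papers. The region size, civilization counts, and the fraction of civilizations that count as humanlike are all invented for illustration; the only figure carried over from above is the roughly 20-billion-galaxy cap on how far any one civilization can reach.

    # Toy model of the territory argument above. Every number is an illustrative
    # assumption except the ~20-billion-galaxy reach cap cited from The Edges of
    # Our Universe.

    MAX_REACH_GALAXIES = 20e9    # cap on one civilization's affectable volume
    REGION_GALAXIES = 1e15       # size of some vast comoving region, in galaxies (made up)
    HUMANLIKE_FRACTION = 0.1     # fraction of civilizations we'd count as counterparts (made up)

    def humanlike_share(n_civs: float) -> tuple[float, float]:
        """Return (galaxies under humanlike control, fraction of the region settled),
        assuming civilizations split the region evenly but none exceeds its reach cap."""
        per_civ = min(MAX_REACH_GALAXIES, REGION_GALAXIES / n_civs)
        settled = n_civs * per_civ / REGION_GALAXIES
        return HUMANLIKE_FRACTION * n_civs * per_civ, settled

    for label, n_civs in [("rare life", 1e2), ("abundant life", 1e8)]:
        humanlike, settled = humanlike_share(n_civs)
        print(f"{label:>13}: {humanlike:.3g} galaxies humanlike, {settled:.1%} of the region settled")

Under these made-up numbers, the abundant universe hands humanlike civilizations about five hundred times as many galaxies in total, even though each individual civilization controls far less.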
Even if we had no counterparts in an infinite universe whom we’d recognize as kin (impossible), I think we’d still generally prefer that the universe be full of whatever life there is, over it just being us and a whole lot of emptiness.
It’s not as Bad as You Might Think
Humans seem to empathize well with non-human animals. We might find it even easier to empathize with technology-using non-human animals, especially if both sides have technologies that allow them to remake their minds to ease communication and mutual understanding. The original biological parts of ourselves might turn out to be unimportant relative to the parts of ourselves that we build. It might be that most of what we become will be held in common with the other species, and that our small original differences become mutually delightful. Ultimately, the end state would be barely any less happy than if we had populated the entire universe alone.
Or Maybe it’s a Lot Worse than You Might Expect
If you evolve intelligent life enough times, it may become overwhelmingly probable that you eventually get a civilization, or a sufficiently powerful group within one, that is interested in annihilating all life everywhere, as far as it can reach. It may be entirely possible for a civilization so driven to trigger false vacuum decay, or something like it, which would kill many, many other civilizations before accelerating cosmological expansion contains it. There is no defense against this sort of “bubble of death”, as Tegmark calls it. If this tends to happen often enough, then there might end up being very little life left, in the end.
It would give us a neat resolution to the Doomsday Argument (edit: as Donald Hobson notes in the comments, it doesn’t: death bubbles leave us with enough time that there is still a youngness paradox, although this does seem to make it a bit less paradoxical), if it turned out that life-supporting universes tend to be robust under natural circumstances, but not robust to the technologies of highly developed technological civilizations. It would explain why we find ourselves in this early era, even though we would naively expect most anthropic moments to occur in the much larger civilizations later on.
Ultimately, though, it does not matter whether the abundance of technological life is “good” or “bad”. It begins outside of our lightcone. There is absolutely nothing any of us could have done to prevent it. To experience an impulse to deem such a thing as “good” or “bad” and to feel regretful or elated about it might just be a sort of neurosis.
Regardless, I hope my musings here will be helpful to the en-fleshing of thy eschatology, good reader.