Grabby Aliens could be Good, could be Bad
Robin Hanson’s Grabby Aliens is a succinct model of how and when technological life will spread. It argues that we’ve simply arrived too early in the universe’s lifespan for other civilizations to have grown to the point of being visible to us yet, but they are out there, and eventually many of them will grow into civilizations large enough that we’ll start to run into each other.
The paper gave me the impression that this was kind of going to be a bad thing, for us, because it means there will be these rapacious colonizers penning us in on every side and forbidding us from rapaciously colonizing as much of the accessible universe as we otherwise would have liked to. I will argue that having neighbors might actually be really good, and then I will argue that it may also be extremely bad, in a way that I don’t think the article touched on.
This could be Very Good
It may be easier to travel at light speed if someone has already built a radio receiver at the other side of the accessible universe: you can travel as a signal, instead of having to physically boost a signal receiver (and programmable nanomanufacturer) across space and somehow decelerate it once it arrives so that it lands intact on a material source. Moving as signals might be much easier than moving physical objects.
Unless there is some other way of traveling through a dead universe at light speed that doesn’t require moving physical objects?
Is it possible to shoot a laser just right so that if it hits a certain common kind of mineral it can cause the matter to melt and oscillate into a form that can then receive a more subtle sort of signal that will cause it to melt and oscillate into a form that can eventually produce a form that receives high bandwidth signal that can manufacture an entire person?
And, what are the most complex chemicals that would survive a light speed collision with a suitable substrate? Could we supply the reaction with our own reagents? Could we hail down a searing heavenfall of primordial soup?
There was a great soul, a Jesuit priest, and a Deist (by my reckoning), Teilhard de Chardin, who wrote of the Omega Point; essentially, he was describing a technological singularity. “He imagined that once the Omega Point had been reached, life might finally move off the planet in an explosive beam of light, heading out to colonize the universe.” I learned of this after musing on this searing heavenfall. I felt the bonds of recurring invention linking our thoughtforms.
The more I think about those things, the more plausible it seems that this would be within reach of superintelligence.
arch1 from shtetloptimized says
I seem to recall that in his book Life 3.0, Max Tegmark speculated that radio transmissions may be the fastest way for an aggressive civilization to expand, with an expansion speed not far from c. The idea is that expanding civilization E sends messages which fool a (newcomer) receiving civilization R into building something, which turns out to seed a new center of expansion for E (and I don’t think it ends well for R, though perhaps that is my embellishment).
It could be argued that we (for a particular sense of ‘we’) actually receive more territory as a result of an abundance of life: a universe teeming with life will have more regions allocated to humanlike species than one where life is extremely rare and regularly runs up against its affectability boundaries, which would leave most of space empty!
To put it another way, as is covered in the paper The Edges of Our Universe: even if we had the universe all to ourselves, that would not mean we’d get to colonize the entire thing!
The maximum theoretical space we can cover before accelerating cosmological expansion makes further travel impossible would only contain about 20 billion galaxies! If life were so rare that affectability volumes barely ever overlapped, most of space would be left devoid of intelligent life, which is probably not to human preference! A universe where life is abundant would actually allocate more space to humans and their counterparts: although each civilization would receive a much smaller volume, there would be disproportionately more of them per unit of space, because they’re everywhere and they’re more densely packed.

Even if we had no counterparts in an infinite universe whom we’d recognize as kin (impossible), I think we’d still generally prefer that the universe be full of whatever life, over it being just us and a whole lot of emptiness.
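The allocation argument above can be sketched as a toy calculation. Every number here is an invented illustration, not a claim about real densities: each civilization claims at most some maximum affectable volume, and at most its “fair share” once neighbors’ volumes start to overlap.

```python
# Toy model (made-up numbers) of how much of the universe ends up
# allocated to human-like civilizations, as a function of how common
# technological life is.  All quantities are illustrative assumptions.

V_MAX = 1.0          # max affectable volume per civ (normalized units)
F_HUMANLIKE = 0.01   # assumed fraction of civs that are human-like

def humanlike_volume_fraction(civ_density):
    """Fraction of all space ending up in human-like hands.

    Each civ claims at most V_MAX, and at most its fair share
    1/civ_density once neighboring volumes start to overlap.
    """
    volume_per_civ = min(V_MAX, 1.0 / civ_density)
    return F_HUMANLIKE * civ_density * volume_per_civ

for density in (1e-6, 1e-3, 1.0, 1e3):
    print(f"civ density {density:g}: human-like fraction "
          f"{humanlike_volume_fraction(density):.2e}")
```

With rare life, almost all of space goes unclaimed and the human-like share is tiny; once life is dense enough that the volumes tile space, the human-like share plateaus at its population fraction, which is the post's point.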
It’s not as Bad as You Might Think
Humans seem to empathize well with non-human animals. We might find it even easier to empathize with technology-using non-human animals, especially if both sides have technologies that allow them to remake their minds to ease communication and mutual understanding. The original biological parts of ourselves might turn out to be unimportant relative to the parts of ourselves that we build. It might be that most of what we become will be held in common with the other species, and that our small original differences become mutually delightful. Ultimately, the end state would be barely any less happy than if we had populated the entire universe alone.
Or Maybe it’s a Lot Worse than You Might Expect
If you evolve intelligent life enough times, it may become overwhelmingly probable that we’ll eventually get a civilization, or a sufficiently powerful group within a civilization, that’s interested in annihilating all life everywhere, as far as it can reach. It may be entirely possible for a civilization so driven to trigger false vacuum decay, or something like that, which would kill many, many other civilizations before accelerating cosmological expansion contained it. There is no defense against this sort of “bubble of death”, as Tegmark calls it. If this happens often enough, there might end up being very little life left, in the end.
It would give us a neat resolution to the Doomsday Argument (edit: As Donald Hobson notes in the comments, no it doesn’t, death bubbles leave us with enough time that there is still a youngness paradox, although this does seem to make it a bit less paradoxical.), if it turned out that life-supporting universes tend to be robust under natural circumstances, but not robust to the technologies of highly developed technological civilizations. It would explain why we find ourselves in this early era, even though we would naively expect most anthropic moments to occur in the much larger civilizations later on.
Ultimately, though, it does not matter whether the abundance of technological life is “good” or “bad”. It begins outside of our lightcone. There is absolutely nothing any of us could have done to prevent it. To experience an impulse to deem such a thing as “good” or “bad” and to feel regretful or elated about it might just be a sort of neurosis.
Regardless, I hope my musings here will be helpful to the en-fleshing of thy eschatology, good reader.
Okay, no, the Teilhardian laser-as-nanomanufacturer idea is probably not workable. I read an extremely basic article about laser attenuation and, bad news: lasers attenuate.
The best a laser could do to any of the planets around the nearest star seems to be making a pulse of somewhat bright light visible to all of them.
I still wonder about sending packets of resilient self-organizing material that could survive a landing, though.
Yep. There are hints that you might be able to alleviate this somewhat with a very powerful laser (vacuum self-focusing is arguably a thing[1], though I don’t believe it has been observed thus far), but good luck getting the accuracy necessary to do anything with it beyond signaling.
(Ditto, a Bessel-beam arguably doesn’t attenuate… but requires infinite energy and beamwidth. Finite approximations do start attenuating eventually.)
See e.g. https://arxiv.org/pdf/hep-ph/0611133.pdf
I don’t think there are enough stars in the universe for that.
It would be worth writing, yeah. It would be an update for me.
P(any civilization in its early computing stage will run any code that is sent to them) ≈ 1 for me; I’m not sure about the other terms. Transmission would also require that a civilization within the broadcast radius enters its computer age, and notices the message, before it matures and stops being vulnerable to being hacked, all before that region of space is colonized by a grabby civ. (Note, though: if this model of spread is practical, we might be able to assume that grabby civs can’t otherwise expand at relativistic speeds, which buys some time before colonization blankets that region of space and stops these vulnerable ages from arising, though I’m not sure how much time that buys us.)
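Multiplying those terms out as a back-of-envelope sketch, with every number below an invented assumption except the ≈ 1 from the comment itself:

```python
# Back-of-envelope sketch (all numbers are invented assumptions) of the
# expected number of expansion centers one broadcast attack seeds,
# multiplying the terms listed in the comment above.

p_runs_code = 1.0   # the comment's own estimate for early-computing civs
p_notices   = 0.1   # assumed: chance the message is noticed while vulnerable
n_civs      = 5     # assumed: civs entering their computer age inside the
                    # broadcast radius before colonization closes the window

expected_seeds = n_civs * p_runs_code * p_notices
print(f"expected new expansion centers per broadcast: {expected_seeds}")
```

With these toy numbers the expectation is 0.5; the scheme's viability hinges almost entirely on how many vulnerable civilizations arise before physical colonization closes the window.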
Interesting that the attacker they end up noticing would be fairly random, less to do with who is closest, more to do with which segment of the sky they happen to scrutinize first.
There are 2 possible cheats I can think of to attenuating lasers.
Firstly, attenuation depends on the radius of the emitter. If you have a 100 ly bubble of your tech, it should in principle be possible to do high-precision laser stuff 200 ly away: a whole bunch of lasers across your bubble, tuned to interfere in just the right way.
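The standard diffraction-limit formula makes this scaling concrete: the spot radius at the target is roughly 1.22 λL/D for wavelength λ, distance L, and emitter aperture D. A quick calculation (assuming a near-infrared 1 μm laser) contrasts an ordinary telescope with an emitter the size of the bubble:

```python
# Diffraction-limited spot size: radius ≈ 1.22 * λ * L / D for
# wavelength λ, target distance L, emitter aperture D.  Shows why a
# single laser can only "signal" at interstellar range, while a phased
# array spanning a 100 ly bubble could in principle focus very tightly
# at 200 ly.  The 1 μm wavelength is an assumed near-infrared choice.

LY = 9.461e15          # metres per light-year
WAVELENGTH = 1e-6      # metres, assumed

def spot_radius(aperture_m, distance_m):
    """Airy-disk radius at the target for an ideal, in-phase emitter."""
    return 1.22 * WAVELENGTH * distance_m / aperture_m

# A generous 10 m telescope aimed at a planet 4 ly away:
small_emitter = spot_radius(10.0, 4 * LY)
# A phased array 100 ly across, aimed 200 ly away:
bubble_emitter = spot_radius(100 * LY, 200 * LY)

print(f"10 m aperture at 4 ly:     spot radius ~ {small_emitter:.2e} m")
print(f"100 ly aperture at 200 ly: spot radius ~ {bubble_emitter:.2e} m")
```

The 10 m aperture smears out to a spot millions of kilometres wide (a bright pulse, nothing more), while the bubble-sized array's ideal spot is micron-scale; the catch, as noted, is the phase accuracy and targeting required to realize anything near that ideal.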
Secondly quantum entanglement. You can’t target one photon precisely, but can you ensure 2 photons go in precisely the same direction as each other?
A beamed mind is vulnerable. You send your mind into the grasp of unknown aliens and they can do whatever they like. Do you want to trust the aliens to be nice?
For travel through neighboring grabby civs, mm, I guess you’d want to get to know them first. Are there ways they could prove that they’re a certain kind of civ, with a certain trusted computing model, that lets them prove that they wont leak you?
For travel through neighboring primitive civs in the vulnerable stage… Maybe you’d send a warrior emissary who doesn’t attribute negative utility to any of its own states of mind. If it’s successful… Hmm… it establishes an encryption protocol with home, and only then do you start sending softer minds.
But that would all take a long time. I wonder if there’d be a way of sending it with the encryption protocol already determined (so it could start accepting your minds without having to send you a public key first), in such a way that it would provably only be able to decrypt later messages if it conquered the target system successfully. Maybe this protocol would require it to spend more resources computing the keys than it would be worth an adversary spending to extort you. Five years of multiple stars running hashers.
Might not be the most profitable approach.
Maybe a mindpattern that elegantly mixes suffering-proof eudaimonia generation with the production of proofs of conquest.
This post is relevant, and has more to say about the benefits of neighbors in approaching lightspeed travel https://www.lesswrong.com/posts/DWHkxqX4t79aThDkg/my-current-thoughts-on-the-risks-from-seti#Alien_expansion_and_contact
Apparently there’s an Armstrong–Sandberg paper that found that getting to 99% of lightspeed is totally feasible with coil guns. So the benefits are mild.
I suspect there are infinitely many copies of each of our minds spread throughout the Omniverse (or certainly more than a hundred).
These minds have identical experiences, but may live under different laws of physics without knowing it. A lucky minority must live in universes where vacuum decay is impossible, including almost all of our distant descendants.
But it is worrying and unpleasant that we seem to live so close to the beginning of time rather than an endless utopia—almost as if that won’t happen at all. The only solution may be that young universes are somehow constantly being generated within older universes.
Vacuum decay isn’t enough to get us to be here. Even if aliens appear lots, and all want vacuum decay, if we don’t, we can still expect millions of years before it hits. In a million years, a Dyson sphere can hold a huge number of humans. (Even more with mind uploading). Ergo us being this early is still a surprise.
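The “huge number of humans” step can be made rough and explicit with standard figures (solar luminosity, ~100 W metabolic power per person); the numbers are order-of-magnitude only:

```python
# Rough arithmetic behind "a Dyson sphere can hold a huge number of
# humans": even one star's output dwarfs all human experience to date,
# so a million-year future makes observers this early highly atypical.
# All figures are order-of-magnitude estimates.

SOLAR_OUTPUT_W  = 3.8e26   # Sun's luminosity in watts
WATTS_PER_HUMAN = 100      # rough metabolic power per person
YEARS_OF_DYSON  = 1e6      # the comment's million-year window

dyson_population    = SOLAR_OUTPUT_W / WATTS_PER_HUMAN    # ~4e24 people
future_person_years = dyson_population * YEARS_OF_DYSON   # ~4e30
past_person_years   = 1e11 * 50   # ~1e11 humans ever, ~50 years each

surprise_ratio = future_person_years / past_person_years
print(f"future/past person-years ~ {surprise_ratio:.1e}")
```

With these figures the ratio is around 8×10^17: finding yourself among the first ~10^-18 of observer-moments is the anthropic surprise the comment is pointing at.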
What you need to make our position fairly typical is either descendants who run lots of ancestor sims, or others who sim us. Or us being utterly doomed to destroy ourselves. The only serious candidate for something this doomy is UFAI.