But I don’t think the Grabby/Loud Aliens argument actually explains my, Lorec’s, earliness in an anthropic sense, given the assumption that future aliens will also be populous and sentient.
There is no assumption that grabby aliens will be sentient in Hanson’s model. They only prevent other sentient civilizations from appearing.
You could make a Grabby Aliens argument without assuming alien sentience, and in fact Hanson doesn’t always explicitly state this assumption. However, as far as I understand Hanson’s world-model, he does indeed believe these alien civilizations [and the successors of humanity] will by default be sentient.
If you did make a Grabby Aliens argument that did not assume alien sentience, it would still have the additional burden of explaining why successful alien civilizations [which come later] are nonsentient, while sentient human civilization [which is early and gets wiped out soon by aliens] is not so successful. It does not seem to make very much sense to model our strong rivals as, most frequently, versions of us with the sentience cut out.
If you take the sentience cut-off as the problem, it boils down to the Doomsday argument: why are we so early in the history of humanity? Maybe our civilization becomes non-sentient after the 21st century, either because of extinction or a non-sentient AI takeover.
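To make the Doomsday-style step concrete, here is a minimal sketch of the standard calculation being gestured at (the candidate totals and the flat prior are my own illustrative assumptions, not anything from Hanson): if your birth rank is treated as a uniform random draw from everyone who will ever exist, the likelihood of that rank is 1/N, which pushes the posterior toward the smallest totals consistent with how many observers have existed so far.

```python
# Minimal Doomsday-argument sketch (illustrative assumptions throughout).
# Treat my birth rank n as uniform over 1..N, where N is the unknown total
# number of observers who will ever exist; then P(N | n) ∝ prior(N) / N for N >= n.

n = 100e9  # roughly 100 billion humans born so far (rough, commonly cited figure)

# A handful of candidate totals, from "we end soon" to "we fill the galaxy",
# with a flat prior over them purely for simplicity.
candidates = [2e11, 1e12, 1e15, 1e20]
prior = {N: 1.0 for N in candidates}

unnormalized = {N: prior[N] / N for N in candidates if N >= n}
Z = sum(unnormalized.values())

for N, w in unnormalized.items():
    print(f"total observers = {N:.0e}: posterior = {w / Z:.6f}")
# The 1/N likelihood concentrates the posterior on the smallest totals
# consistent with our rank, i.e. on "sentient observers mostly stop soon" --
# extinction or a non-sentient successor, as in the comment above.
```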
If we agree with the Doomsday argument here, we should agree that most AI-civilizations are non-sentient. And as most Grabby Aliens are AI-civilizations, they are non-sentient.
TLDR: If we apply anthropics to the location of sentience in time, we should assume that Grabby Aliens are non-sentient, and thus the Grabby Alien argument is not affected by the earliness of our sentience.
Agreed.
And given that the earliness of our sentience is the very thing the Grabby Aliens argument is supposed to explain, I think this non-dependence is damning for it.
Thanks, now I better understand your argument.
However, we can expect that any civilization is sentient only for a short time in its development, analogous to the 19th-21st centuries. After that, it becomes controlled by non-sentient AI. Thus, it’s not surprising that aliens are not sentient during their grabby stage.
But one may argue that even a grabby alien civilization has to pass through a period when it is sentient.
For that, Hanson’s argument may suggest that:
a) All the progenitors of future grabby aliens already exist now (maybe we will become grabby).
b) Future grabby aliens destroy any possible civilization before it reaches the sentient stage in the remote future.
Thus, the only existing sentient civilizations are those in the early stage of the universe (see the toy sketch below).
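Here is that toy sketch: a deliberately crude 1-D cartoon with parameters I made up (not Hanson's calibrated model), in which candidate civilizations appear at random, need a fixed incubation period before they get through their sentient phase and go grabby, and are erased if an earlier grabby front reaches them during or before that window. The survivors cluster heavily toward the earliest times.

```python
# Toy 1-D "grabby expansion" sketch (my own illustration, not Hanson's actual model).
# Candidate civilizations appear at random times and places; each needs S years
# of a "sentient" phase before it can go grabby and expand at speed v.
# A candidate is wiped out if an earlier grabby front reaches it first.
import random

random.seed(0)
L, T = 1000.0, 1000.0      # size of the toy universe (light-years) and time horizon (years)
v, S = 0.5, 50.0           # expansion speed (fraction of c) and length of the sentient phase
N = 2000                   # number of candidate civilizations

candidates = sorted((random.uniform(0, T), random.uniform(0, L)) for _ in range(N))

grabby = []                # surviving civilizations, as (time they went grabby, position)
survivors = []
for t, x in candidates:    # processed in order of appearance
    # Does any existing grabby front reach x before this civ finishes its sentient phase?
    doomed = any(tg + abs(x - xg) / v < t + S for tg, xg in grabby)
    if not doomed:
        survivors.append(t)
        grabby.append((t + S, x))

print(f"{len(survivors)} of {N} candidates ever complete a sentient phase")
print(f"mean appearance time, all candidates: {sum(t for t, _ in candidates)/N:.0f}")
print(f"mean appearance time, survivors:      {sum(survivors)/len(survivors):.0f}")
# With these (arbitrary) numbers, the surviving sentient civilizations cluster
# strongly toward the early history of the toy universe, as in point (b) above.
```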
I imagine that nonsentient replicators could reproduce and travel through the universe faster than sentient ones, and speed is crucial for the Grabby Aliens argument.
You probably need sentience to figure out space travel, but once you get that done, maybe the universe is sufficiently regular that you can just follow the same relatively simple instructions over and over again. And even if you occasionally meet an irregularity, such as an intelligent, technologically advanced civilization that has changed something about their part of the universe, the flood will just go around them, consuming all the resources in their neighborhood, and probably hurting them a lot in the process even if they manage to survive.
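To put rough numbers on why speed is so decisive here (my numbers, purely illustrative): the volume a replicator wave has claimed scales with the cube of its expansion speed, so even a modest speed advantage compounds.

```python
# Claimed volume of an expansion wave scales like (speed * time)^3.
# Illustrative speeds only; nothing here is from Hanson's model.
from math import pi

def claimed_volume(speed_c: float, years: float) -> float:
    """Cubic light-years colonized by a sphere expanding at a constant speed."""
    radius_ly = speed_c * years
    return 4.0 / 3.0 * pi * radius_ly ** 3

t = 1e8  # a hundred million years of expansion
careful = claimed_volume(0.50, t)   # a slower, "sentient" colonization wave
flood = claimed_volume(0.80, t)     # a streamlined, nonsentient flood

print(f"careful wave: {careful:.2e} ly^3")
print(f"flood:        {flood:.2e} ly^3")
print(f"ratio:        {flood / careful:.1f}x")   # (0.80 / 0.50)^3 ~ 4.1
# And wherever the two waves contest the same region, the faster one simply
# arrives first; speed, not sentience, decides who ends up holding the volume.
```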
Okay, but why would someone essentially burn down the entire universe? First, we don’t know what kind of utility function the aliens have. Maybe they value something (paperclips? holy symbols?) way more than sentience. Or maybe they are paranoid about potential enemies, and burning down the rest of the universe seems like a reasonable defense to them. Second, it could happen as an accident; with billions of space probes across the universe, random mutations may happen, and the mutants that lost sentience but gained a little speed would outcompete the probes that follow the originally intended design.
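The “accident” branch is just selection pressure among the probes themselves. A minimal sketch with made-up parameters (a tiny mutation chance per copy, and a 5% copying advantage for the variant that drops the sentience machinery) shows how completely the mutant lineage takes over:

```python
# Toy selection dynamics among self-replicating probes (made-up parameters).
# Each "generation", every probe copies itself; copies of the original design
# occasionally mutate into a nonsentient variant that replicates slightly faster.
mutation_rate = 1e-6      # chance per copy of losing the sentience machinery
r_original = 1.00         # replication factor of the intended design
r_mutant = 1.05           # mutants skip some overhead and copy a bit faster

original, mutant = 1.0, 0.0   # relative population sizes
for generation in range(1, 801):
    new_original = original * r_original * (1 - mutation_rate)
    new_mutant = mutant * r_mutant + original * r_original * mutation_rate
    original, mutant = new_original, new_mutant
    if generation % 200 == 0:
        share = mutant / (original + mutant)
        print(f"generation {generation}: mutant share = {share:.3%}")
# With even a tiny mutation rate and a 5% copying advantage, the nonsentient
# lineage ends up dominating the probe population after enough generations.
```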
“it could happen as an accident; with billions of space probes across the universe, random mutations may happen, and the mutants that lost sentience but gained a little speed would outcompete the probes that follow the originally intended design.”
This is, indeed, what I meant by “nonsentient mesa-optimizers” in OP:
Why do you expect sentience to be a barrier to space travel in particular, and not interstellar warfare? Interstellar warfare with an intelligent civilization seems much harder than merely launching your von Neumann probe into space.
I agree with you that “civilizations get swept by nonsentient mesa-optimizers” is anthropically frequent. I think this resolves the Doomsday Argument problem. Hanson’s position is different from both mine and yours.
Good question! I didn’t actually think about this consciously, but I guess my intuitive assumption is that sufficiently advanced civilizations are strongly constrained by the laws of physics, which are the same for everyone, regardless of their intelligence.
A genius human inventor living in the 21st century could possibly invent a car that is 10x faster than any other car invented so far, or maybe a rocket that is 100x faster than any other rocket. But if an alien civilization already has spaceships that fly at 0.9c, the best their super-minds can do is increase that to 0.99c, or maybe 0.999c; and even if they get to 0.999999c, it won’t make much of a difference when two civilizations on opposite sides of the galaxy send their bombs at each other.
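For concreteness, the arithmetic behind “it won’t make much of a difference” (my numbers; coordinate time, as seen by the civilization being targeted):

```python
# Time to cross ~100,000 light-years at various fractions of c
# (coordinate time, i.e. as measured by the civilization being targeted).
distance_ly = 100_000

for speed_c in (0.9, 0.99, 0.999, 0.999999):
    years = distance_ly / speed_c
    print(f"{speed_c}c -> {years:,.0f} years")
# 0.9c      -> 111,111 years
# 0.99c     -> 101,010 years
# 0.999c    -> 100,100 years
# 0.999999c -> 100,000 years
# All the heroic engineering past 0.9c shaves only ~10% off the arrival time.
```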
Similarly, a human military in the 21st century could invent more powerful bombs, then maybe better bunkers, then even more powerful bombs, and so on. So the more intelligent side can keep an advantage. But if the alien civilizations invent bombs that can blow up stars, or create black holes and throw them at your solar system, there is probably not much you can do about it. Especially if they launch the bombs not only at you, but also at all the solar systems around you. Suppose you survive the attack, but all stars within 1000 light years around you are destroyed. How will your civilization advance now? You need energy, and there is a limit on how much energy you can extract from a star; and if you have no more stars, you are out of energy.
Once you get the “theory of everything” and develop technology to exploit nature at that level, there is nowhere further to go. The speed of light is finite. The matter in your light cone is finite; the amount of energy you can extract from it is finite. If you are already at, say, 1% of the fundamental limits, no invention can ever make you more than 100x more efficient than you are now. Which means that a civilization that starts with 100x more resources (because they started expanding through the universe earlier, or sacrificed more to become faster) will crush you.
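The same point as one line of arithmetic (illustrative numbers): if output is roughly efficiency times resources, and you are already at 1% of the physical ceiling, then your best possible future self is merely matched by a rival who is no cleverer but started with 100x the resources.

```python
# If useful output ~ efficiency * resources, ingenuity only moves the efficiency
# factor, and physics caps that factor at 1.0. Numbers are illustrative.
my_efficiency, my_resources = 0.01, 1.0          # clever, but late and small
their_efficiency, their_resources = 0.01, 100.0  # no cleverer, just started earlier

my_absolute_ceiling = 1.0 * my_resources             # the best I can ever reach
their_output_today = their_efficiency * their_resources

print(f"my ceiling (perfect efficiency): {my_absolute_ceiling}")
print(f"their output right now:          {their_output_today}")
# Their present output already equals the best I could ever achieve; any further
# improvement on their side, or any use of their head start to strip my
# resources, puts them permanently ahead.
```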
This is not a proof; one could argue that the smarter civilization would, e.g., try to escape instead of fighting, or that there must be a way to overcome what seem like the fundamental limitations of physics, like maybe creating your own parallel universe and escaping there. But this is my intuition about how things would work on the galactic scale. If someone throws a sufficient number of black holes at you, or strips bare all the resources around you, it’s game over even for the Space Einstein.
The level where there is suddenly “nowhere further to go”—the switch from exciting, meritocratic “Space Race” mode to boring, stagnant “Culture” mode—isn’t dependent on whether you’ve overcome any particular physical boundary. It’s dependent on whether you’re still encountering meaningful enemies to grind against, or not. If your civilization first got the optimization power to get into space by, say, a cutthroat high-speed Internet market [on an alternate Earth this could have been what happened], the market for high-speed Internet isn’t going to stop being cutthroat enough to encourage innovation just because people are now trying to cover parcels of 3D space instead of areas of 2D land. And even in stagnant “Culture” mode, I don’t see why [members/branches of] your civilization would choose to get dumber [lose sentience or whatever other abilities got them into space].
“Suppose you survive the attack, but all stars within 1000 light years around you are destroyed.”
I question why you assign significant probability to this outcome in particular.
Have you read “That Alien Message”? A truly smart civilization has ways of intercepting asteroids before they hit, if the attacks are sufficiently dumb/slow—even ones that are nominally really physically powerful.