Even if somehow being a good person meant you could only go at 0.99999c instead of 0.999999c, the difference from our perspective as to what the night sky should look like is negligible. Details of the utility function should not affect the achievable engineering velocity of a self-replicating intelligent probe.
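For illustration only, here’s a quick back-of-the-envelope sketch of how little that speed difference matters; the distances below are arbitrary placeholders, not estimates from any colonization model:

```python
# Illustrative back-of-the-envelope check (distances chosen arbitrarily):
# how much later does a probe at 0.99999c arrive than one at 0.999999c?

def arrival_delay_years(distance_ly, v_slow_c, v_fast_c):
    """Extra travel time, in years, for the slower probe over a given distance."""
    return distance_ly / v_slow_c - distance_ly / v_fast_c

for d in (1e3, 1e6, 1e9):  # a thousand, a million, a billion light-years
    delay = arrival_delay_years(d, 0.99999, 0.999999)
    print(f"{d:>15,.0f} ly: slower probe arrives {delay:,.1f} years later "
          f"({delay / d:.1e} of the trip)")
```

Even across a billion light-years the slower probe shows up only about nine thousand years later, roughly a 10^-5 fraction of the trip, which is nothing on the timescales that determine what the night sky looks like.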
The Fermi Paradox is a hard problem. This does not mean your suggestion is the only idea anyone will ever think of for resolving it and hence that it must be right even if it appears to have grave difficulties. It means we either haven’t thought of the right idea yet, or that what appear to be difficulties in some existing idea have a resolution we haven’t thought of yet.
Even if somehow being a good person meant you could only go at 0.01c instead of 0.999999c...
What’s your favored hypothesis? Are we the first civilization to have come even this far (filter constrains transitions at some earlier stage, maybe abiogenesis), at least in our “little light corner”? Did others reach this stage but then perish due to x-risks excluding AI (local variants of grey goo, or resource depletion etc.)? Do they hide from us, presenting us a false image of the heavens, like a planetarium? Are the nanobots already on their way, still just a bit out? (Once we send our own wave, I wonder what would happen when those two waves clash.) Are we simulated (and the simulators aren’t interested in interactions with other simulated civilizations)?
Personally, I find the last hypothesis the most natural fit. Being the first kids on the block is also not easily dismissible; compared to what one might expect, the universe is still ridiculously young vis-à-vis e.g. how long our very own Sol has already been around (13.8 vs. 4.6 billion years).
The only really simple explanation is that life (abiogenesis) is somehow much harder than it looks, or there’s a hard step on the way to mice. Grey goo would not wipe out every single species in a crowded sky, some would be smarter and better-coordinated than that. The untouched sky burning away its negentropy is not what a good mind would do, nor an evil mind either, and the only simple story is that it is empty of life.
Though with all those planets, it might well be a complex story. I just haven’t heard any complex stories that sound obviously right or even really actually plausible.
How hard do you think abiogenesis looks? However much larger than our light-pocket the Universe is, counting many worlds, that’s the width of the range of difficulty abiogenesis can fall in and still account for the Fermi paradox. AIUI that’s a very wide, possibly infinite range, and it doesn’t seem at all implausible to me that it’s in that range. Do you have a model which would be slightly surprised by finding it that unlikely?
There doesn’t actually have to be one great filter. If there are 40 “little filters” between abiogenesis and “a space-faring intelligence spreading throughout the galaxy”, and at each stage life has a 50% chance of moving past the little filter, then the odds of any one potentially life-supporting planet getting through all 40 filters is only 1 in 2^40, or about one in a trillion, and we probably wouldn’t see any others in our galaxy. Perhaps half of all self-replicating RNA gets to the DNA stage, half of the time that gets up to the prokaryote stage, half of the time that gets to the eukaryote stage, and so on, all the way up through things like “intelligent life form comes up with the idea of science” or “intelligent life form passes through an industrial revolution”. None of the steps have to be all that improbable in an absolute sense, if there are enough of them.
The “little filters” wouldn’t necessarily have to be as devastating as the great filters we usually imagine; anything that could knock either evolution or a civilization back so that it had to repeat a couple of other “little filters” would usually be enough. For example, “a civilization getting through its first 50 years after the invention of the bomb without a nuclear war” could be a little filter, because even though it might not cause the extinction of the species, it might require a civilization to pass through some other little filters again to get back to that level of technology, and some percentage might never do that. Same with asteroid strikes, drastic ice ages, etc.; anything that sets the clock back on evolution for a while.
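A minimal sketch of the arithmetic in the 40-filters scenario above; only the 40 filters at 50% each come from the comment, and the per-galaxy planet count is a made-up placeholder of mine:

```python
# Illustrative arithmetic for the "40 little filters" picture above.
# The candidate-planet count is a placeholder, not a real estimate.

n_filters = 40
p_pass = 0.5                      # chance of clearing each little filter
p_all = p_pass ** n_filters       # chance of clearing every one of them

candidate_planets = 1e11          # hypothetical life-supporting planets per galaxy

print(f"P(clear all {n_filters} filters) = {p_all:.2e}  (about 1 in {1 / p_all:,.0f})")
print(f"Expected spacefaring civilizations per galaxy: {candidate_planets * p_all:.2f}")
```

With those placeholder numbers the expected count per galaxy comes out well below one, consistent with the “about one in a trillion” figure and with not seeing any others nearby.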
If that was true, we’d expect to find microbial life on a nontrivial number of planets. That’ll be testable in a few years.
(Once we send our own wave, I wonder what would happen when those two waves clash)
Given the vastness of space, they would pass through each other and each compete with the others on a system-by-system basis. Those who got a foothold first would have a strong advantage.
Blob wars! Twist: the blobs are sentient!
What gobbledegook. Or is it goobly goop? The bloobs versus the goops?
I’m not trying to resolve the Fermi problem. I’m pointing out alien UFAIs should be more visible than alien FAIs, and therefore their apparent absence is more remarkable.
We understand you are saying that. Nobody except you believes it, for the good reasons given in many responses.
Since we’re talking about alien value systems in the first place, we shouldn’t talk as though any of these is ‘Friendly’ from our perspective. The question seems to be whether a random naturally selected value set is more or less likely than a random artificial unevolved value set to reshape large portions of galaxies. Per the Convergence Of Instrumental Goals thesis, we should expect almost any optimizing superintelligence to be hungry enough to eat as much as it can. So the question is whether the rare exceptions to this rule are disproportionately on the naturally selected side.
That seems plausible to me. Random artificial intelligences are only constrained by the physical complexity of their source code, whereas evolvable values have a better-than-chance probability of having terminal values like Exercise Restraint and Don’t Eat All The Resources and Respect Others’ Territory. If a monkey coding random utility functions on a typewriter is less likely than evolution to hit on something that intrinsically values Don’t Fuck With Very Much Of The Universe, then friendly-to-evolved-alien-values AI is more likely than unfriendly-to-evolved-alien-values AI to yield a Fermi Paradox.
Agreed, but if both eat galaxies with very high probability, it’s still a bit of a lousy explanation. Like, if it were the only explanation we’d have to go with that update, but it’s more likely we’re confused.
Agreed. The Fermi Paradox increases the odds that AIs can be programmed to satisfy naturally selected values, a little bit. But this hypothesis, that FAI is easy relative to UFAI, does almost nothing to explain the Paradox.
They should be very, very slightly less visible (they will have slightly fewer resources to use due to expending some on keeping their parent species happy, and FAI is more likely to have a utility function that intentionally keeps itself invisible to intelligent life than UFAI, even though that probability is still very small), but this difference is negligible. Their apparent absence is not significantly more remarkable, in comparison to the total remarkability of the absence of any form of highly intelligent extra-terrestrial life.