I’m not trying to resolve the Fermi problem. I’m pointing out alien UFAIs should be more visible than alien FAIs, and therefore their apparent absence is more remarkable.
We understand you are saying that. Nobody except you believes it, for the good reasons given in many responses.
Since we’re talking about alien value systems in the first place, we shouldn’t talk as though any of these is ‘Friendly’ from our perspective. The question seems to be whether a random naturally selected value set is more or less likely than a random artificial unevolved value set to reshape large portions of galaxies. Per the Convergence Of Instrumental Goals thesis, we should expect almost any optimizing superintelligence to be hungry enough to eat as much as it can. So the question is whether the rare exceptions to this rule are disproportionately on the naturally selected side.
That seems plausible to me. Random artificial intelligences are constrained only by the physical complexity of their source code, whereas evolvable value sets have a better-than-chance probability of including terminal values like Exercise Restraint and Don’t Eat All The Resources and Respect Others’ Territory. If a monkey coding random utility functions on a typewriter is less likely than evolution to hit on something that intrinsically values Don’t Fuck With Very Much Of The Universe, then friendly-to-evolved-alien-values AI is more likely than unfriendly-to-evolved-alien-values AI to yield a Fermi Paradox.
Agreed, but if both eat galaxies with very high probability, it’s still a bit of a lousy explanation. Like, if it were the only explanation we’d have to go with that update, but it’s more likely we’re confused.
Agreed. The Fermi Paradox increases the odds, a little bit, that AIs can be programmed to satisfy naturally selected values. But this hypothesis, that FAI is easy relative to UFAI, does almost nothing to explain the Paradox.
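A quick numeric sketch of why that update is small (all probabilities below are made-up assumptions, used only to make the odds arithmetic concrete):

```python
# Toy Bayes update: how much should "we observe a silent sky" shift our odds
# toward "alien AIs tend to end up friendly to evolved values"?
# All probabilities here are illustrative assumptions, not figures from the thread.

def posterior_odds(prior_odds: float,
                   p_silence_friendly: float,
                   p_silence_unfriendly: float) -> float:
    """Posterior odds = prior odds * likelihood ratio of the 'silent sky' observation."""
    return prior_odds * (p_silence_friendly / p_silence_unfriendly)

# If both kinds of AI expand visibly with very high probability, silence is
# only slightly more likely under the friendly hypothesis...
odds = posterior_odds(prior_odds=1.0,
                      p_silence_friendly=0.03,
                      p_silence_unfriendly=0.02)
print(odds)  # 1.5 -- a small nudge toward "FAI is easy relative to UFAI"

# ...and silence remains very improbable under either hypothesis, which is why
# this hypothesis does almost nothing to explain the Paradox itself.
```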
They should be very, very slightly less visible: they will have slightly fewer resources to use, since some go toward keeping their parent species happy, and an FAI is more likely than a UFAI to have a utility function that intentionally keeps it invisible to intelligent life, though even that probability is very small. But this difference is negligible. Their apparent absence is not significantly more remarkable, compared with the total remarkability of the absence of any form of highly intelligent extra-terrestrial life.