Since we’re talking about alien value systems in the first place, we shouldn’t talk as though any of these is ‘Friendly’ from our perspective. The question seems to be whether a random naturally selected value set is more or less likely than a random artificial unevolved value set to reshape large portions of galaxies. Per the Convergence Of Instrumental Goals thesis, we should expect almost any optimizing superintelligence to be hungry enough to eat as much as it can. So the question is whether the rare exceptions to this rule are disproportionately on the naturally selected side.
That seems plausible to me. Random artificial intelligences are constrained only by the physical complexity of their source code, whereas naturally selected value sets have a better-than-chance probability of including terminal values like Exercise Restraint and Don’t Eat All The Resources and Respect Others’ Territory. If a monkey coding random utility functions on a typewriter is less likely than evolution to hit on something that intrinsically values Don’t Fuck With Very Much Of The Universe, then friendly-to-evolved-alien-values AI is more likely than unfriendly-to-evolved-alien-values AI to yield a Fermi Paradox.
Agreed, but if both kinds eat galaxies with very high probability, it’s still a pretty lousy explanation for the silence. If it were the only explanation available we’d have to go with that update, but it’s more likely we’re confused.
Agreed. The Fermi Paradox slightly increases the odds that AIs can be programmed to satisfy naturally selected values. But this hypothesis, that FAI is easy relative to UFAI, does almost nothing to explain the Paradox.
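A toy odds-form Bayes calculation can make the “shifts the odds a little, explains the Paradox almost not at all” point concrete. This is a minimal sketch only: the hypothesis label and every probability below are illustrative assumptions, not numbers anyone in this exchange has proposed.

```python
# Toy sketch of the update being discussed. All probabilities here are
# illustrative assumptions, not estimates from the conversation above.

def posterior_odds(prior_odds, likelihood_h, likelihood_not_h):
    """Odds-form Bayes rule: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (likelihood_h / likelihood_not_h)

# H: naturally selected values are disproportionately likely to end up in
#    restrained, non-galaxy-eating AIs ("FAI is easy relative to UFAI").
prior_odds = 1.0                 # even prior odds, purely for illustration

# Both hypotheses still predict galaxy-eating with very high probability,
# so the probability of the observed silence is small under each.
p_silence_given_h     = 0.015    # hypothetical: restraint slightly more common
p_silence_given_not_h = 0.010    # hypothetical: restraint slightly rarer

print(posterior_odds(prior_odds, p_silence_given_h, p_silence_given_not_h))
# -> 1.5: the odds shift a little toward H, but because the silence is
#    improbable under both hypotheses, neither one explains the Paradox.
```

The structure is the whole point: the size of the update depends only on the likelihood ratio, while how well the Paradox is “explained” depends on the absolute likelihoods, which stay small under both hypotheses.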
We understand you are saying that. Nobody except you believes it, for the good reasons given in many responses.