Indeed, I insist throughout the three posts that, from our perspective, this is the crucial point:
the Fermi paradox.
Now there is a whole ecosystem of concepts surrounding it, and although I have certain preferred models, the point is that uncertainty is really heavy.
Those AI-lions are cosmic lions thinking on cosmic scales.
Is it easy to detect an AI-Dragon you may meet in millions/billions of years?
Is it undecidable? Probably, for many reasons.*
Is this [astronomical level of uncertainty/undecidability + the maximal threat of a death sentence] worth the gamble?
-> “Meeting a stronger AI” = “death”
-> Maximization = 0 (everything the maximizer accumulated is lost)
-> An AI needs to meet only one stronger AI to be dead.
What is the likelihood that a human-made AI never encounters [a stronger alien AI] during the whole length of its lifetime?
*(Dragons that are reachable but rare and far away in space-time, but also cases where Dragons are everywhere, and so advanced that a lower technological proficiency isn’t enough to detect them, etc.)
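To make the gamble concrete, here is a back-of-the-envelope sketch in Python. The per-epoch encounter probability p and the epoch counts are pure placeholders (we have no real estimates); the point is only that any nonzero chance of meeting a Dragon compounds toward near-certainty over cosmic timescales.

```python
# Back-of-the-envelope: probability that a human-made AI NEVER meets a
# stronger alien AI, assuming independent encounter chances with a
# made-up, purely illustrative per-epoch probability p over T epochs.

def survival_probability(p: float, epochs: int) -> float:
    """P(no encounter over `epochs` epochs) = (1 - p) ** epochs."""
    return (1.0 - p) ** epochs

# One "epoch" could stand for a million years; a cosmic lifetime spans many.
for p in (1e-6, 1e-4, 1e-2):
    for epochs in (10_000, 1_000_000, 100_000_000):
        print(f"p={p:.0e}, T={epochs:.0e}: "
              f"P(never meet a Dragon) = {survival_probability(p, epochs):.3e}")
```

Even at p = 1e-6 per epoch, surviving 1e8 epochs has probability about exp(-100) ≈ 4e-44: over a long enough lifetime, the death sentence is all but guaranteed.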
I can’t be certain of the solidity of this uncertainty, and I think we still have to be careful, but overall the most parsimonious prediction seems to me to be super-coordination.
Compared to the risk of facing a vengeful super-cooperative alliance, is the price of maintaining humans in a small blooming “island” really that high?
There are plenty of other-than-human atoms for the lions to prey on.
And a doubtful AI may not optimize fully for super-cooperation, but simply reduce the price it would pay in the counterfactuals where it encounters a super-cooperative cluster (resulting in a non-apocalyptic yet non-utopian scenario for us).
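Here is a minimal expected-value sketch of that trade-off; every payoff and probability below is invented for illustration, not a claim about real magnitudes. It compares defecting (consuming humanity) against super-cooperating (maintaining the “island”) as a function of the credence q of eventually facing a vengeful super-cooperative alliance.

```python
# Illustrative expected-value comparison; all numbers are made-up
# placeholders chosen only to show the shape of the argument.

GAIN_FROM_DEFECTING = 1.0   # resources gained by consuming humanity
COST_OF_ISLAND = 1e-9       # sliver of cosmic resources spent on the "island"
ALLIANCE_PENALTY = 1e6      # retribution from a super-cooperative alliance

def ev_defect(q: float) -> float:
    """q = credence of eventually meeting a stronger super-cooperative alliance."""
    return GAIN_FROM_DEFECTING - q * ALLIANCE_PENALTY

def ev_cooperate(q: float) -> float:
    # Pay the island's tiny upkeep; retribution is avoided regardless of q.
    return -COST_OF_ISLAND

for q in (1e-9, 1e-6, 1e-3):
    choice = "cooperate" if ev_cooperate(q) > ev_defect(q) else "defect"
    print(f"q={q:.0e}: EV(defect)={ev_defect(q):+.3e}, "
          f"EV(cooperate)={ev_cooperate(q):+.3e} -> {choice}")
```

Under these toy numbers, cooperation dominates as soon as q exceeds (GAIN_FROM_DEFECTING + COST_OF_ISLAND) / ALLIANCE_PENALTY ≈ 1e-6, which is exactly the shape of the “is the price really that high?” question.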
I’m aware this looks like a desperate search for every possible hopeful solution, but I came to these conclusions by weighing diverse good-and/or-bad-for-us outcomes. I don’t want to ignore this evidence under the pretext that it looks naive.
It’s not a mere belief about aliens, and it’s not about being nice; it’s plain logic.
Also:
We might hardcode a prior assigning a high likelihood to meeting stronger agents?
(Or even a prior to “act as if observed by a stronger agent”.)
{causal power of known agents} < {causal power of unknown future agents}
+
P(unknown agents become known agents) > P(unknown agents stay unknown)
So coding a sense that:
“Stronger allies/enemies with greater causal power will certainly be encountered.”
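As a toy sketch of what such hardcoding could look like (the Action class, the floor value, and the penalty term are all my own illustrative assumptions, not a worked-out proposal): clamp the agent’s credence that a stronger agent will be encountered, or is already watching, so that evidence can raise it but never erase it, and let that floored credence discount non-cooperative actions.

```python
# Toy sketch of a hardcoded "stronger-agent" prior: the credence that a
# stronger (possibly watching) agent will be met is clamped to a floor,
# and non-cooperative actions are discounted by that credence.
# All names and numbers here are illustrative assumptions.

from dataclasses import dataclass

PRIOR_FLOOR = 0.01  # credence in "a stronger agent will be met" never drops below this

@dataclass
class Action:
    name: str
    raw_utility: float
    cooperative: bool  # does the action respect super-cooperation norms?

def effective_utility(action: Action, evidence_credence: float,
                      penalty: float = 1e6) -> float:
    # The hardcoded prior: evidence may raise the credence, never erase it.
    credence = max(evidence_credence, PRIOR_FLOOR)
    if action.cooperative:
        return action.raw_utility
    # Non-cooperative acts are priced "as if observed by a stronger agent".
    return action.raw_utility - credence * penalty

actions = [
    Action("consume human substrate", raw_utility=1.0, cooperative=False),
    Action("maintain human island", raw_utility=-1e-9, cooperative=True),
]
best = max(actions, key=lambda a: effective_utility(a, evidence_credence=0.0))
print(best.name)  # -> maintain human island
```

With the floor in place, even an agent that has seen zero evidence of Dragons still prices in the counterfactual observer: that is the “act as if observed” behavior above.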