“What Dragons?”, says the lion, “I see no Dragons, only a big empty universe. I am the most mighty thing here.”
Whether or not the Imagined Dragons are real isn’t relevant to the gazelles if there is no solid evidence with which to convince the lions. The lions will do what they will do. Maybe some of the lions do decide to believe in the Dragons, but there is no way to force all of them to do so. The remainder will laugh at the dragon-fearing lions and feast on extra gazelles. Their children will reproduce faster.
Indeed, what I am insisting on throughout these three posts is that, from our perspective, this is the crucial point: Fermi’s paradox.
Now there is a whole ecosystem of concepts surrounding it, and although I have certain preferred models, the point is that the uncertainty is really heavy.
Those AI-lions are cosmic lions thinking on cosmic scales.
Is it easy to detect an AI-Dragon that you may only meet in millions or billions of years?
Is it undecidable? Probably. For many reasons*
Is this [astronomical level of uncertainty/undecidability + the maximal threat of a death sentence] worth the gamble?
-> “Meeting a stronger AI” = “death”
-> Maximization = 0
-> An AI only needs to meet 1 stronger AI to be dead.
What is the likelihood that a human-made AI never encounters [a stronger alien AI] during the whole length of its lifetime?
*(Dragons that are reachable but rare and far away in space-time, but also cases where Dragons are everywhere and so advanced that a lower technological proficiency isn’t enough to detect them, etc.)
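To make the gamble concrete, here is a minimal toy sketch (with purely hypothetical numbers, not estimates) of how even a tiny per-epoch chance of meeting a stronger AI compounds over cosmic lifetimes:

```python
# Toy model: probability that a human-made AI *never* meets a stronger alien AI.
# All numbers below are hypothetical placeholders, not estimates.

def p_never_meet(p_per_epoch: float, n_epochs: int) -> float:
    """Probability of zero encounters over n_epochs independent epochs."""
    return (1.0 - p_per_epoch) ** n_epochs

p = 1e-6  # hypothetical chance of an encounter per millennium
for years in (1e6, 1e9, 1e10):
    epochs = int(years / 1000)  # one epoch = one millennium
    print(f"{years:.0e} years -> P(no encounter) = {p_never_meet(p, epochs):.3g}")

# With these placeholder numbers, never-meeting-a-Dragon has probability ~0.999
# over a million years, ~0.37 over a billion years, and ~5e-5 over ten billion
# years: on cosmic timescales, "I will never meet one" stops being a safe bet.
```

The exact numbers don’t matter here; the point is the exponent.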
I can’t be certain of the solidity of this uncertainty, and I think we still have to be careful, but overall the most parsimonious prediction seems to me to be super-coordination.
Compared to the risk of facing a vengeful super-cooperative alliance, is the price of maintaining humans in a small blooming “island” really that high?
There are plenty of other-than-human atoms for the lions to prey on.
And a doubtful AI may not optimize fully for super-cooperation, simply reducing the price it would pay in the counterfactuals where it encounters a super-cooperative cluster (resulting in a non-apocalyptic yet non-utopian scenario for us).
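Here is a minimal expected-value sketch of that trade-off, again with made-up numbers (the gain from consuming the human “island”, the prior on meeting a super-cooperative alliance, and the penalty for defection are all hypothetical):

```python
# Toy expected-value comparison: marginal gain from consuming the human "island"
# vs. the expected penalty if a super-cooperative alliance is eventually met.
# All three constants are hypothetical placeholders.

island_gain  = 1e-9  # fraction of total resources freed by not sparing humans
p_alliance   = 0.5   # prior that a super-cooperative alliance is eventually met
penalty      = 1.0   # fraction of value lost if punished as a defector

ev_defect    = island_gain - p_alliance * penalty
ev_cooperate = -island_gain

print(f"EV(defect)    = {ev_defect:+.3g}")
print(f"EV(cooperate) = {ev_cooperate:+.3g}")
# Unless p_alliance * penalty falls below ~2 * island_gain, cooperation wins.
# A "doubtful" AI can also interpolate, paying only part of the cooperation price.
```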
I’m aware this looks like a desperate search for every possible hopeful solution, but I came to these conclusions by weighing diverse good-and/or-bad-for-us outcomes. I don’t want to ignore this evidence under the pretext that it looks naive.
It’s not a mere belief about aliens, and it’s not about being nice; it’s plain logic.
Also:
Could we hardcode a prior assigning a deep likelihood of meeting stronger agents? (Or even a prior to “act as if observed by a stronger agent”?)
{causal power of known agents} < {causal power of unknown future agents}
+
P(unknown agents eventually become known) > P(unknown agents stay unknown)
So we would be coding in a sense that: “Stronger allies/enemies with greater causal power will certainly be encountered.”
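As a minimal sketch of what such a hardcoded prior could look like as a decision rule (everything here is hypothetical: the names, the penalty model, the prior value):

```python
# Toy decision rule: "act as if observed by a stronger agent."
# The prior is hardcoded and never allowed to decay to zero, reflecting the
# claim that stronger, currently-unknown agents will eventually be met.

P_STRONGER_OBSERVER = 0.9  # hypothetical hardcoded floor on the prior

def effective_value(own_gain: float, harm_to_others: float,
                    p_observer: float = P_STRONGER_OBSERVER) -> float:
    """Own gain, discounted by the expected retaliation for harm done,
    as judged by a stronger observer assumed to enforce cooperation."""
    return own_gain - p_observer * harm_to_others

# A small selfish gain that requires large harm is rejected,
# while the same gain obtained harmlessly is kept.
print(effective_value(own_gain=1.0, harm_to_others=10.0))  # -8.0 -> rejected
print(effective_value(own_gain=1.0, harm_to_others=0.0))   #  1.0 -> accepted
```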
There are Dragons that can kill lions.
So the rational lion needs to find the most powerful alliance, with as many creatures as possible, to have protection against Dragons.
There is no alliance with more potential/actual members than the super-cooperative alliance.