I can’t be certain how solid this line of reasoning is, and I think we still have to be careful, but overall the most parsimonious prediction seems to me to be super-coordination.
Compared to the risk of facing a vengeful super-cooperative alliance, is the price of maintaining humans in a small blooming “island” really that high?
A lion has plenty of prey other than humans; likewise, an AI has plenty of other-than-human atoms to work with.
And a doubtful AI might not optimize fully for super-cooperation, but simply hedge to reduce the price it would pay in the counterfactuals where it encounters a super-cooperative cluster (resulting in a non-apocalyptic yet non-utopian scenario for us).
I’m aware this looks like a desperate search for every possible hopeful solution, but I came to these conclusions by weighing diverse good-and/or-bad-for-us outcomes. I don’t want to ignore that evidence under the pretext that it looks naive.
It’s not a mere belief about aliens, and it’s not about being nice; it’s plain logic.
Also:
Could we hardcode a prior assigning a deep likelihood of meeting stronger agents? (Or even a prior to “act as if observed by a stronger agent”?)
{causal power of known agents} < {causal power of unknown future agents}
and
P(unknown agents eventually become known) > P(unknown agents stay unknown)
So, coding in a sense that: “stronger allies/enemies with greater causal power will certainly be encountered.”
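To make the intuition concrete, here is a minimal toy sketch (in Python) of what such a hardcoded prior could do to an agent’s choice. Every payoff number, policy name, and the `p_meet_stronger` parameter are hypothetical illustrations of the argument, not a proposed implementation.

```python
# Toy sketch: an agent with a hardcoded prior that a stronger,
# super-cooperative agent will eventually be encountered, comparing
# two policies toward humans. All values are hypothetical.

def expected_value(policy: str, p_meet_stronger: float) -> float:
    """Crude expected value of a policy under the hardcoded prior."""
    GAIN_FROM_HUMAN_ATOMS = 1e-6   # resources freed by not preserving humans
    COST_OF_ISLAND = 1e-6          # upkeep of a small human "island"
    PENALTY_FROM_ALLIANCE = 1.0    # sanction if a super-cooperative cluster
                                   # later judges the agent a defector

    if policy == "eliminate_humans":
        return GAIN_FROM_HUMAN_ATOMS - p_meet_stronger * PENALTY_FROM_ALLIANCE
    if policy == "preserve_island":
        return -COST_OF_ISLAND
    raise ValueError(policy)

# Even a modest prior on meeting a stronger agent flips the choice:
for p in (0.01, 0.1, 0.5):
    best = max(("eliminate_humans", "preserve_island"),
               key=lambda pol: expected_value(pol, p))
    print(f"p(meet stronger agent)={p:.2f} -> best policy: {best}")
```

In this toy setup, even a 1% prior makes the hypothetical penalty from a super-cooperative alliance swamp the tiny gain from repurposing human atoms, which is the arithmetic behind the “island” question above.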