I mean you don’t have to assume a singleton AI becoming very powerful very quickly. You can assume intelligence and friendliness developing in parallel [and incrementally].
I am talking about Siri. I mean that human engineers are making, or will make, multiple efforts at simultaneously improving AI and friendliness, and the ecosystem of AIs and AI users is selecting, or will select, for friendliness that works.
But then we might be able to achieve AI safety in a relatively easy way by creating networks of interacting agents (including agents interacting with us).
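A minimal sketch of the selection dynamic being proposed (every trait, constant, and name here is an illustrative assumption, not anything from the thread): a population of AI variants differs in capability and friendliness, users preferentially adopt variants that are both useful and safe, and the ecosystem is repeatedly resampled in proportion to adoption.

```python
import random

random.seed(0)

# A population of AI variants; each has a capability score and a
# friendliness score. Both traits and all constants are made up
# for illustration.
population = [
    {"capability": random.uniform(0.0, 1.0), "friendliness": random.uniform(0.0, 1.0)}
    for _ in range(100)
]

def adoption(agent):
    # Assumed user preference: friendliness only attracts users when it
    # "works", i.e. comes packaged with capability.
    return agent["capability"] * agent["friendliness"]

def mutate(value, drift=0.0, noise=0.02):
    # Small incremental change, clamped to [0, 1].
    return min(1.0, max(0.0, value + random.gauss(drift, noise)))

for generation in range(50):
    weights = [adoption(a) for a in population]
    # Resample the ecosystem in proportion to adoption: widely used
    # variants get copied and incrementally improved.
    population = [
        {
            "capability": mutate(parent["capability"], drift=0.01),
            "friendliness": mutate(parent["friendliness"]),
        }
        for parent in random.choices(population, weights=weights, k=len(population))
    ]

mean_friendliness = sum(a["friendliness"] for a in population) / len(population)
print(f"mean friendliness after selection: {mean_friendliness:.2f}")
```

The sketch only shows that if users can observe and reward friendliness that works, the ecosystem selects for it; it says nothing about deceptive agents or about one agent pulling far ahead, which is the worry raised below.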
I think you’re pointing out the conclusion of the assumption: if you have multiple AIs, then none of them is much stronger than the others.
Sorry, didn’t follow that. Can you elaborate?
Hmm.
Are you suggesting (super)intelligence would be a result of direct human programming, like Friendliness presumably would be?
Or that Friendliness would be a result of self-modification, like superintelligence is predicted to be ’round these parts?
Is the idea that the network develops at roughly the same rate, with no single entity undergoing a hard takeoff?
Yes.
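To make the contrast concrete, here is a toy numerical sketch (the growth constants are arbitrary assumptions): AIs improved incrementally by engineers grow at similar, roughly linear rates and stay comparable, while an AI whose improvements compound on its own capability grows exponentially and quickly dwarfs the rest.

```python
# Toy contrast between incremental parallel progress and a hard takeoff.
# All growth constants are arbitrary assumptions for illustration.

def incremental(capability, step=0.05):
    # Engineer-driven progress: roughly linear, independent of current level.
    return capability + step

def recursive(capability, rate=0.05):
    # Self-improvement: progress proportional to current capability,
    # i.e. exponential growth.
    return capability * (1 + rate)

parallel = [1.0, 1.1, 0.9]   # several AIs with similar starting points
takeoff = 1.0                # one AI that improves itself

for t in range(200):
    parallel = [incremental(c) for c in parallel]
    takeoff = recursive(takeoff)

print(f"spread among parallel AIs: {max(parallel) / min(parallel):.2f}x")
print(f"takeoff AI vs. best parallel AI: {takeoff / max(parallel):.0f}x")
```

Under the incremental model the spread stays near 1x, so none of the agents is ever much stronger than the others, which is the premise behind the multiple-AIs point above; under the recursive model one agent ends up orders of magnitude ahead.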
In what sense don’t I have to assume it? I think a singleton AI happens to be a likely scenario, and this has little to do with cooperation.
The more alternative scenarios there are, the lower the likelihood of the MIRI scenario, and the less need for the MIRI solution.
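The arithmetic behind this point, with purely illustrative credences (assuming the scenarios are mutually exclusive and exhaustive):

```python
# Illustrative, made-up credences over alternative development scenarios;
# any probability mass assigned to alternatives comes directly out of
# the singleton / hard-takeoff scenario.
alternatives = {
    "parallel incremental ecosystem": 0.35,
    "slow takeoff with several leaders": 0.20,
    "no superintelligence this century": 0.20,
}
p_miri_scenario = 1.0 - sum(alternatives.values())
print(f"credence left for the singleton scenario: {p_miri_scenario:.2f}")  # 0.25
```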
I don’t understand what it has to do with cooperative game theory.