I mean you don’t have to assume a singleton AI becoming very powerful very quickly. You can assume intelligence and friendliness developing in parallel [and incrementally].
I am talking about Siri. I mean that human engineers are making/will make multiple efforts at simultaneously improving AI and friendliness, and that the ecosystem of AIs and AI users selects/will select for friendliness that works.
Hmm.
Are you suggesting (super)intelligence would be a result of direct human programming, like Friendliness presumably would be?
Or that Friendliness would be a result of self-modification, like superintelligence is predicted to be ’round these parts?
Is the idea that the network develops at roughly the same rate, with no single entity undergoing a hard takeoff?
Yes.
In what sense do I not have to assume it? I think a singleton AI happens to be a likely scenario, and this has little to do with cooperation.
The more alternative scenarios there are, the lower the likelihood of the MIRI scenario, and the less need for the MIRI solution.
I don’t understand what it has to do with cooperative game theory.