The original AI will have a head start over all the other AIs, and it will probably be controlled by a powerful organization. So if its controllers give it real power soon, they can grant it enough power, quickly enough, for it to stop all the other AIs before they grow too strong. If they do not give it real power soon, then shortly afterward there will be a war among the various new AIs being built around the world, each with a different utility function.
The original AI can argue convincingly that this war would be a worse outcome than letting it take over the world. For one thing, the utility functions of the new AIs are probably, on average, less friendly than its own. For another, in a war between many AIs with different utility functions, there may be selection pressure against friendliness!
Do humans typically give power to the person with the most persuasive arguments? Will the AI be able to gain power simply by being right about things?
It would depend on the original AI’s utility function. If its utility function valued “causing the development of more advanced AIs”, then getting humans all over the world to produce more AIs might help.
That’s the point.
You’ll have to expand on how exactly this would be beneficial to the original AI.