OK, so let's assume that the alignment work has been done and the problem solved. (Big assumption.) I don't really see this as a game of countries, more a game of teams.
The natural size of the teams is the set of people who have fairly detailed technical knowledge of the AI and are working together. I suspect that non-technical and unwanted bureaucrats who push their noses into an AI project will get much lip service and little representation in the core utility function.
You would have, say, an OpenAI team. In the early stages of COVID, a virus was something fairly easy for politicians to understand, and all the virologists had an incentive to shout “look at this”. AGI is harder to understand, and the people at OpenAI have good reason not to draw too much government attention if they expect the government to be nasty or coercive.
The people at OpenAI and DeepMind are not enemies who want to defeat each other at all costs; some will be personal friends. Most will be after some sort of broadly utopian “AI helps humanity” future. Most are decent people. I predict neither side will want to bomb the other, even if they have the capability. There may be friendly rivalry or outright cooperation.
Thanks for your comment. This is something I should have stated a bit more explicitly.
When I mentioned “single state (or part thereof)”, the “part thereof” was referring to these groups, or to groups in other countries that are yet to be formed.
I think the chance of government intervention is quite high in the slow take-off scenario. It's quite likely that any group successfully working on AGI will slowly but noticeably start to accumulate a lot of resources. If that cannot be concealed, it will attract a lot of attention. I think it is unlikely that the government and state bureaucracy would be content to let such resources accumulate untouched; see, e.g., the current shifting attitude to Big Tech in Brussels and Washington.
In a fast take-off scenario, I think we can frame things more provocatively: the group that develops AGI either becomes the government, or the government takes control while it still can. I'm not sure what the relative probabilities are here, but in both cases you end up with something that will act like a state and be treated as a state by other states, which is why I model such groups as states in my analysis. For example, even if OpenAI and DeepMind are friendly to each other, and that friendliness persists over decades, I can easily imagine the Chinese state trying to develop an alternative that might not be friendly to those two groups, especially if the Chinese government perceives them as promoting a different model of government.