I think the cooperative advantages mentioned here have been overlooked when it comes to forecasting AI impacts, especially in slow takeoff scenarios. Many forecasts, like WFLL, mainly posit AIs competing with each other; consequently, Molochian dynamics come into play and humans easily lose control of the future. But with these sorts of cooperative advantages, AIs are in an excellent position to escape those forces and all the strategic disadvantages they bring with them. This applies even if an AI is “merely” at the human level. I could easily see an outcome that from a human perspective looks like a singleton taking over, but is in reality a collective of similar or identical AIs working together with superhuman coordination capabilities.
I’ll also add source-code swapping and greater transparency to the list of cooperative advantages at an AI’s disposal. Different AIs that would normally get stuck in multipolar traps might not stay stuck for long if they can do things analogous to source-code-swap prisoner’s dilemmas.
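To make the source-code-swap idea concrete, here is a toy sketch of the classic “clique bot” strategy from the program-equilibrium literature: an agent that sees its counterpart’s source and cooperates only if it recognizes an exact copy of itself. The names and the string representation of “source code” are illustrative assumptions, not anything from the original discussion.

```python
# Toy model of a source-code-swap prisoner's dilemma.
# "Source code" is modeled as a plain string that both agents can read.
# (Names like CLIQUE_SRC and clique_bot are hypothetical.)

CLIQUE_SRC = "cooperate iff opponent's source == this source"

def clique_bot(opponent_source: str) -> str:
    """Return 'C' (cooperate) iff the opponent runs an exact copy of us."""
    return "C" if opponent_source == CLIQUE_SRC else "D"

# Two identical AIs that can verify each other's code both cooperate,
# escaping the defect-defect equilibrium of the ordinary dilemma:
print(clique_bot(CLIQUE_SRC))      # both copies play C against each other

# Against a program that isn't a copy, the bot safely defects:
print(clique_bot("always defect"))
```

Exact-match checking is brittle (trivially different but behaviorally identical programs get defected against), which is why the literature moves on to proof-based and simulation-based variants; but even this crude version shows how mutual source transparency can unlock cooperation unavailable to opaque human agents.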
For the most part, to the extent that AIs have these advantages, this still doesn’t suggest a discontinuity; it suggests that we will be able to automate tasks with weaker, less intelligent AI systems than you might otherwise have thought.
> This applies even if an AI is “merely” at the human level.
I usually think of Part 1 of WFLL as happening prior to reaching what I would call human-level AI, because of these AI advantages. Though the biggest AI advantage feeding into this is simply that AI systems can be specialized to particular tasks, whereas humans become general reasoners and then apply their general reasoning to particular tasks.