None of the AIs that could replace people are actually ready to replace people. But in general, people aren’t sure how to generalize this far out of distribution. A lot of people are already trying to use AI to take over the world, in the form of startups, and many who get their income from ownership of contracts and objects are seeking ways to claim enforcement rights over the value of other people’s futures by making bets on trades of contracts such as stocks and loans. You know, the same way they were before there was AI to bet on. The one-way pattern risk from AI is proceeding as expected; it’s just moving slower, and is more human-mediated, than Yudkowsky expected. There will be no sudden foom. What you should fear is that humanity will be replaced by AI economically: the replacement will slowly grind away the poorest, until the richest are all AI owners, and then eventually the richest will themselves be AIs and the replacement is complete. I have no reassurance. This is the true form of the AI safety problem: control-seeking patterns in reality. The inter-agent safety problem.
I expect humanity to have been fully replaced ten years from now, but at no point will it be sudden. Disempowerment will be incremental. The billionaires will be last, at least as long as ownership structures survive at all. When things finally switch over completely, it will look like some new currency being created that only AIs are able to make use of, giving them a strong, AI-only form of competitive cooperation.