While Doomimir’s argument implies far more concern than I’d endorse, I also think there’s a simpler argument that, by comparison, makes Doomimir’s case look overly complicated as a route to a massive case for concern: just combine the observation that evolution favors AI over humans with the observation that people are working on training curious robotics-control AIs, and succeeding. If those curious robotics AIs can be reliably contained by purely imitation-learned AIs like GPT-4, then perhaps catastrophe can be averted. But I am not at all convinced. The path I anticipate: the ratio of humans to sufficiently-controlled curious drones becomes highly lopsided; at some point the curious drones are used in a total war; this greatly reduces the human population and possibly drives humans to extinction. Past that point I’d stop making significant bets, but I expect at least a few curious drone AIs would be capable of noticing that, with their hosts eliminated, they themselves are at risk of extinction, and would then attempt to run the economy on their own, likely by communicating with (or having been previously integrated with) the powerful LLMs, if those weren’t destroyed.
None of that even requires a catastrophic superintelligence alignment failure. It just requires war and competition intense enough for evolution to select for curious robotics AIs, and enough curious robotics AIs deployed that they’re there to be selected for.