The most obvious reason for skepticism about the impact this would cause follows.
David Manheim: I do think that Leopold is underrating how slow much of the economy will be to adopt this. (And so I expect there to be huge waves of bankruptcies of firms that are displaced or that adapted slowly, and resulting concentration of power, but also some delay as assets change hands.)
I do not think Leopold is making that mistake. I think Leopold is saying two things: that the drop-in remote worker will integrate seamlessly, and that he does not much care how fast most businesses adapt to it. As long as the AI labs (and those in their supply chains?) are using the drop-in workers, it mostly does not matter who else does. The local grocery store refusing to cut its operational costs won't much postpone the singularity.
David Manheim: I want to clarify the point I was making. I don't think this directly changes the trajectory of AI capabilities; I think it changes the speed at which the world wakes up to those possibilities. That is, in worlds with the pace of advances he posits, the impacts on the economy lag behind the advances in AI themselves, so we get a faster takeoff in capabilities than in the economic impacts that would make the transformation fully obvious to the rest of the world.
The more important point, in my mind, is what this means for geopolitics, which I think aligns with your skepticism. As I said responding to Leopold’s original tweet: “I think that as the world wakes up to the reality, the dynamics change. The part of the extensive essay I think is least well supported, and least likely to play out as envisioned, is the geopolitical analysis. (Minimally, there’s at least as much uncertainty as AI timelines!)”
I think the essay showed lots of caveats and hedging about the question of capabilities and timelines, but then told a single story about geopolitics, one that I think is both unlikely and that fails to notice a critical fact: it describes a world where government is smart enough to act quickly, but not smart enough to notice that we all die very soon. To quote myself again, “I think [this describes] a weird world where military / government “gets it” that AGI will be a strategic decisive advantage quickly enough to nationalize labs, but never gets the message that this means it’s inevitable that there will be loss of control at best.”