There’s a bad argument against AGI risk that goes kinda like this:
Transformers will not scale to AGI.
Ergo, worrying about AGI risk is silly.
Hey, while you’re here, let me tell you about this other R&D path which will TOTALLY lead to AGI … …
Thanks for listening! (applause)
My read is that this blog post has that basic structure. Venkatesh goes through an elaborate argument and eventually winds up in Section 10, where he argues that a language model trained on internet data won’t be a powerful agent that gets things done in the world, but that if we train an embodied AI with a robot body, then it could be a powerful agent that gets things done in the world.
And my response is: “OK fine, whatever”. Let’s consider the hypothesis “we need to train an embodied AI with a robot body in order to get a powerful agent that gets things done in the world”. If that’s true, well, people are perfectly capable of training AIs with robot bodies! And if that’s really the only possible way to build a powerful AGI that gets things done in the world, then I have complete confidence that sooner or later people will do that!!
We can argue about whether the hypothesis is correct, but it’s fundamentally not a crazy hypothesis, and it seems to me that if the hypothesis is true then it changes essentially nothing about the core arguments for AGI risk. Just because the AI was trained using a robot body doesn’t mean it can’t crush humanity, and also doesn’t mean that it won’t want to.
In Venkatesh’s post, the scenario where “people build an embodied AI with a robot body” is kinda thrown in at the bottom, as if it were somehow a reductio ad absurdum?? I’m not crystal clear on whether Venkatesh thinks that such an AI (A) won’t get created in the first place, or (B) won’t be able to crush humanity, or (C) won’t want to crush humanity. I guess probably (B)? There’s a throwaway reference to (C), but not an argument. A lot of the post could be taken as an argument against (B), in which case I strongly disagree for the usual reasons; see for example §3.2 here (going through well-defined things that an AGI could absolutely do sooner or later, like run the same algorithms as John von Neumann’s brain but 100× faster and with the ability to instantly spin off clone copies, etc.), or §1.6 here (for why radically superhuman capabilities seem unnecessary for crushing humanity anyway).
(Having a robot body does not prevent self-replication: the AGI could presumably copy its mind into an AI with a similar virtual robot body in a VR environment, at which point it’s no longer limited by robot bodies; all it would need is compute.)
(I kinda skimmed, sorry to everyone if I’m misreading / mischaracterizing!)