I’m not sure why some skepticism wouldn’t be justified by the lack of progress in robotics.
Robots require reliability, because without it you destroy hardware and other material. Even in areas where we have had enormous progress (LLMs, diffusion models), we do not have the kind of reliability that would let you broadly trust their output without supervision. That lack of reliability seems indicative of some fundamental things yet to be learned.
The skepticism I object to has less to do with the idea that ML systems are not robust enough to operate robots, and more to do with people rationalizing based on the gut feeling that “robots are not scary enough to justify considering AGI a credible threat” (whether or not they voice this intuition).
I agree that having highly capable robots that run on ML would be evidence that AGI is coming soon, and thus that the lack of such robots is evidence in the opposite direction.
That said, because the main threat from AGI that concerns me comes from reasoning and planning capabilities, I think robotics can be something of a red herring. I’m not saying we shouldn’t update on the lack of competent robots, but I am saying we shouldn’t flippantly rely on the intuition “that robot can’t do all sorts of human tasks, so I guess machines aren’t that smart and this isn’t a big deal yet”.
I am not trying to imply that this is the reasoning you are employing, but it is a type of reasoning I have seen in the wild. If anything, the lack of robustness in current ML systems might actually be more concerning overall, though I am uncertain about this.