IDK what the previous post had in mind, but one possibility is that an AGI with superhuman social and human manipulation capabilities wouldn’t strictly need advanced robotics to take arbitrary physical actions in the world.
This is something I frequently get hung up on: if the AGI is highly intelligent and socially manipulative but lacks good motor skills/advanced robotics, doesn't that imply it also lacks an important spatial sense necessary to understand, manipulate, or design physical objects? Even if it could manipulate humans into taking arbitrarily precise physical actions, it would need pretty good spatial reasoning to know the expected outcome of those actions.
I guess the AGI could just solve the problem of human alignment, so our superior motor and engineering skills don’t carelessly bring it to harm.
There are robotics transformers, and general-purpose models like Gato, that can control robots.
If AGI is extremely close, the reason is criticality. All the pieces for an AGI system with general capabilities — working memory, robotics control, perception, and "scratch" mind spaces, including some that can model 3D relationships — already exist in separate papers.
Normally it would take humans years, likely a decade, of methodical work to build increasingly complex integrated systems, but current AI may be good enough to bootstrap its way there in a short time, assuming a very large robotics hardware and compute budget.