With the strawberries thing, the point isn’t that it couldn’t do those things, but that it won’t want to. After making itself smart enough to engineer nanotech, its developing ‘mind’ will have run off in unintended directions and it will have wildly different goals than the ones we wanted it to have.
Quoting EY from this video: “the whole thing I’m saying is that we do not know how to get goals into a system.” <-- This is the entire thing that researchers are trying to figure out how to do.
With limited-scope, non-agentic systems we can set goals, and we do. Each subsystem in the “strawberry project” stack has to be trained in a simulation over many examples of the task space it will face, and optimized for policies that satisfy the simulator’s goals.
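As a minimal sketch of what “optimized for policies that satisfy the simulator goals” means for one such narrow subsystem (the task, the policy form, and all names here are illustrative toys, not anything from an actual system):

```python
import random

# Toy stand-in for one narrow subsystem: a simulator scores a policy on
# many sampled task instances, and a search loop keeps whichever
# parameters best satisfy the simulator-defined goal.

def simulate(params, n_episodes=200):
    """Score a linear threshold policy on random task instances.
    Goal (defined entirely by the simulator): output 1 when the two
    inputs sum to more than 1.0, else 0."""
    score = 0
    for _ in range(n_episodes):
        x, y = random.random(), random.random()
        action = 1 if params[0] * x + params[1] * y > params[2] else 0
        target = 1 if x + y > 1.0 else 0
        score += int(action == target)
    return score / n_episodes

def optimize(steps=500):
    """Simple hill-climbing search for parameters that satisfy the goal."""
    best = [random.uniform(-1, 1) for _ in range(3)]
    best_score = simulate(best)
    for _ in range(steps):
        candidate = [p + random.gauss(0, 0.1) for p in best]
        s = simulate(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

if __name__ == "__main__":
    params, score = optimize()
    print(f"best simulator score: {score:.2f} with params {params}")
```

The point is only that the goal lives in the simulator’s scoring function, and the optimization pressure is bounded to that narrow task space.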
But not with something powerful enough to engineer nanotech.
Why do you believe this? Nanotech engineering does not require social or deceptive capabilities. It requires deep and precise knowledge of nanoscale physics and the limitations of manipulation equipment, and probably a large amount of working memory (so, beyond human capacity), but why would it need to be anything but a large model? It need not even be agentic.
At that level of power, I imagine that general intelligence will be a lot easier to create.
Take the “think about it for 5 minutes” advice and consider how you might create a working general intelligence. I suggest looking at the GATO paper for inspiration.
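For what that pointer amounts to, here is a minimal sketch of the Gato-style recipe: serialize data from many different tasks into one shared token stream and train a single autoregressive sequence model on all of it. Everything below (model size, the fake_episode helper, the task mix) is an illustrative placeholder, not the paper’s actual setup.

```python
import torch
import torch.nn as nn

VOCAB, CTX, DIM = 512, 64, 128  # shared token vocabulary and context size

class TinyGeneralist(nn.Module):
    """One sequence model shared across every task's token stream."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.pos = nn.Embedding(CTX, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):
        pos = torch.arange(tokens.size(1), device=tokens.device)
        x = self.embed(tokens) + self.pos(pos)
        # causal mask: each position may only attend to earlier tokens
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        x = self.blocks(x, mask=mask)
        return self.head(x)

def fake_episode(task_id, batch=8):
    """Stand-in for a tokenized episode (observation/action tokens) from one task."""
    return torch.randint(0, VOCAB, (batch, CTX))

model = TinyGeneralist()
opt = torch.optim.Adam(model.parameters(), lr=3e-4)
for task_id in range(3):  # mix episodes from several tasks into one model
    tokens = fake_episode(task_id)
    logits = model(tokens[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"task {task_id} next-token loss: {loss.item():.3f}")
```

The relevant feature for this argument is that the same weights are optimized against every task’s stream, which is exactly where the “general” in general intelligence starts to enter.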