I suppose what would change my mind on this is, … You couldn’t find weird behaviors, no matter how hard you tried. It always seemed to be doing intelligent things. Then I would really buy it. I think what’s interesting about the existing systems is that they’re very impressive, and it’s pretty crazy what they can do, but it doesn’t take that much probing to also find weird, silly behaviors still. Now maybe those silly behaviors will disappear in another couple of orders of magnitude, in which case I will probably take a step back and go, “Well, maybe scale is all you need.”
Blake and many other people I know seem to think that weird silly behaviors mean we aren’t close to AGI, whereas I think the AGI that accelerates R&D, takes over the world, etc. may do so while also exhibiting occasional weird silly behaviors. AIs are not humans; they are going to be better in some areas and worse in others, and their failures will sometimes be similar to human failures, but not always. They’ll have various weird silly deficiencies, just as (from their perspective) we have various weird silly deficiencies.
I’d like to understand this perspective better:
> Blake and many other people I know seem to think that weird silly behaviors mean we aren’t close to AGI, whereas I think the AGI that accelerates R&D, takes over the world, etc. may do so while also exhibiting occasional weird silly behaviors. AIs are not humans; they are going to be better in some areas and worse in others, and their failures will sometimes be similar to human failures, but not always. They’ll have various weird silly deficiencies, just as (from their perspective) we have various weird silly deficiencies.