Contemporary AI agents based on neural networks are exactly like that. They do whatever they feel compelled to do in the moment. If anything, they have less coherence than humans, and no capacity for introspection at all. I doubt that AI will magically go from this current, very sad state to being a coherent agent. It might modify itself into coherence some time after becoming superintelligent, but it won’t be coherent out of the box.
Interesting. I know very little about the ML field, but my impression from reading what the ML and AI alignment experts write on this site is that they model an AI as an agent to some degree, not as something that just “does something incoherent at any given moment”.
I mean, “doing something incoherent at any given moment” is also perfectly agent-y behavior. Babies are agents, too.
I think the problem is that modelling an incoherent AI is even harder than modelling a coherent AI, so most alignment researchers simply hope that AI researchers will manage to build coherence in before there is a takeoff, so that they can base their theories on the assumption that the AI is already coherent.
I find that view overly optimistic. I expect that AI is going to remain incoherent until long after it has become superintelligent.