I wonder if the current state of the art corresponds to the pre-conscious stage of evolution, before the internal narrator and self-awareness. Maybe neural networks will soon develop the skill of explaining (or rationalizing) their decisions.
This seems pretty likely. An AI that does internal reasoning will find it useful to have its own opinions about why it thinks things, and those opinions need bear about as much relationship to its internal microscopic function as human opinions about thinking do to human neurons.