This seems pretty likely. An AI that does internal reasoning will find it useful to have its own opinions about why it thinks things, and those opinions need bear about as much relationship to its internal microscopic function as human beliefs about thinking do to human neurons.