I have trouble understanding how one can tell an agent from a non-agent without having access to its “source code”, or at least its goals.
Goals are likely to be emergent in many systems, such as neural nets. OTOH, trainable neural nets would be corrigible.