Some AGI researchers use the notion of a utility function to define what an AI “wants” to happen. How does the notion of a utility function differ from the notion of a will?
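For concreteness, the standard textbook formalization (a sketch of the usual definition, not a claim about how any particular AGI design actually works): a utility function assigns a real-valued score to each possible outcome, and the agent is modeled as choosing whichever available action maximizes expected utility:

U : \Omega \to \mathbb{R}, \qquad a^{*} = \arg\max_{a \in A} \; \mathbb{E}\!\left[\, U(\omega) \mid a \,\right] = \arg\max_{a \in A} \sum_{\omega \in \Omega} P(\omega \mid a)\, U(\omega)

Here \Omega is the set of possible outcomes, A the set of available actions, and P(\omega \mid a) the agent's probability of outcome \omega given action a. Nothing in this formalism refers to desire, deliberation, or volition; it is simply a ranking over outcomes together with a decision rule.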