You’re right, it was not specific enough to contribute to the conversation. Still, my point, though general, was straightforward: I don’t believe there is a control problem, because I don’t believe AI means what most people think it means.
To elaborate, learning algorithms are just learning algorithms and always will be. Nobody actually working on AI in practice is trying to build anything like an entity that has a will. And humans have somehow forgotten about will, which is why they’re scared of AI.
Some AGI researchers use the notion of a utility function to define what an AI “wants” to happen. How does the notion of a utility function differ from the notion of a will?
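To make the question concrete, here is a toy Python sketch of the kind of thing I mean by a utility function; the paperclip scoring and all the names are purely illustrative, not taken from any real system:

```python
# Toy illustration: a "utility function" here is just an ordinary function
# from candidate outcomes to numbers, and the "agent" is an optimizer that
# picks whichever outcome scores highest. The paperclip example is the usual
# hypothetical, not a description of any actual system.

def utility(outcome: dict) -> float:
    # Score an outcome purely by how many paperclips it contains.
    return outcome["paperclips"]

candidate_outcomes = [
    {"paperclips": 3, "description": "make a few paperclips"},
    {"paperclips": 1_000_000, "description": "convert the factory"},
]

# The "agent" simply maximizes the score over the options it can see.
chosen = max(candidate_outcomes, key=utility)
print(chosen["description"])  # -> "convert the factory"
```

The “wanting” in this sketch is nothing more than an argmax over a scoring function; whether that counts as a will is exactly what I’m asking.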
Will only matters for Green Lanterns.