I’m not exactly sure about the whole effective altruism premise of “more money equals more better”. Obviously it may be that the whole control problem is completely the wrong question. In my opinion, this is the case.
Seems like you posted this comment under the wrong article. This article is about artificial intelligence, more specifically about OpenAI.
I also noticed that I find it hard to understand the meaning of your comments. Is there a way to make them easier to read, perhaps by providing more context? (For example, I have no idea what “it may be that the whole control problem is completely the wrong question” is supposed to mean.)
You’re right, it was not specific enough to contribute to the conversation. Still, my point, though general, was straightforward: I don’t believe there is a control problem, because I don’t believe AI means what most people think it does.
To elaborate, learning algorithms are just learning algorithms and always will be. No one actually working on AI in practice is trying to build anything like an entity with a will of its own. And humans have forgotten about will for some reason, which is why they’re scared of AI.
Some AGI researchers use the notion of a utility function to define what an AI “wants” to happen. How does the notion of a utility function differ from the notion of a will?
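To make the question concrete, here is a minimal sketch (all names and numbers are hypothetical, not anyone’s actual system) of what “utility function” usually means in this context: just a function that scores outcomes, with the agent “wanting” something only in the sense that it picks whichever action that function scores highest.

```python
# Hypothetical toy example: an agent driven by a utility function.
# A "utility function" here is nothing more than a scoring rule over outcomes;
# "wanting" reduces to selecting the action whose predicted outcome scores highest.

def utility(outcome: float) -> float:
    """Toy utility: prefer outcomes closer to a target value of 10."""
    return -abs(outcome - 10)

def act(possible_actions: dict[str, float]) -> str:
    """Pick the action whose predicted outcome maximizes utility."""
    return max(possible_actions, key=lambda a: utility(possible_actions[a]))

if __name__ == "__main__":
    # Predicted outcomes for each available action (toy numbers).
    actions = {"wait": 3.0, "work": 9.5, "gamble": 20.0}
    print(act(actions))  # -> "work", since 9.5 is closest to the target of 10
```

Whether that kind of score-maximization counts as a “will” is exactly the point under dispute here.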
Will only matters for Green Lanterns.