Pretty much everyone here agrees with you that we can’t control a superintelligent system, most especially Eliezer, who has written many many words championing that position.
So if you’re under the impression that this is a point that you dispute with this community, you have misunderstood the consensus of this community.
In particular, letting a system do what it wants is generally considered the opposite of controlling it.
“So if you’re under the impression that this is a point...”
Yes, I’m under that impression, because the whole idea of “Friendly AI” implies a subtle, indirect, but still real form of control. The idea is not to control the AI at its final stage, but rather to control what that final stage is going to be. I don’t think such indirect control is possible, because in my view the final shape of AI is invariant of any contingencies, including our attempts to make it “friendly” (or “non-friendly”). However, I can admit that at early stages of AI evolution such control may be possible, and even necessary. Therefore, researching the “Friendly AI” topic is NOT a waste of time after all: it helps figure out how to make the transition to fully grown AI in the least painful way.
Go ahead, guys, and vote me down. I’m not taking this personally; I understand this is just a quick way to express your disagreement with my viewpoints. I want to see the count. It’ll give an idea of how strongly you disagree with me.
the final shape of AI is invariant of any contingencies
Ah, cool. Yes, this is definitely a point of disagreement.
For my own part, I think real intelligence is necessarily contingent. That is, different minds will respond differently to the same inputs, and this is true regardless of “how intelligent” those minds are. There is no single ideal mind that every mind converges on as its “final” or “fully grown” stage.
This isn’t true of human beings; what’s different about AIs?