...if you claim that any superintelligence will inevitably converge to some true code of morality, then you are also claiming that no measures can be taken by its creators to prevent this convergence.
...if you claim that any superintelligent oracle will inevitably return the same answer given the same question, then you are also claiming that no measures can be taken by its creators to make it return a different answer.
Sounds uncontroversial to me. I wouldn’t expect to be able to create a non-broken AI, even a comparatively trivial one, that thinks 1+1=3. On the other hand, I do think I could create comparatively trivial AIs that leverage their knowledge of arithmetic to accomplish widely varying ends. Simultaneous Localization and Mapping, for example, works for a search-and-rescue bot or a hunt/kill bot. A toy sketch of that point follows.
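To make that concrete, here is a minimal sketch (everything in it is hypothetical: the grid, the names, and breadth-first search standing in for a real SLAM stack). The same pathfinding capability serves either a rescue objective or a hunt/kill objective; only the goal coordinate differs.

    # Minimal sketch: one shared capability (pathfinding on a mapped grid),
    # two divergent ends. All names and the map are made up for illustration.
    from collections import deque

    def shortest_path(grid, start, goal):
        """Breadth-first search on a 2D occupancy grid (True = blocked)."""
        rows, cols = len(grid), len(grid[0])
        frontier = deque([(start, [start])])
        seen = {start}
        while frontier:
            (r, c), path = frontier.popleft()
            if (r, c) == goal:
                return path
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and not grid[nr][nc] and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    frontier.append(((nr, nc), path + [(nr, nc)]))
        return None  # goal unreachable

    # Identical capability, divergent ends: only the chosen goal differs.
    grid = [[False, False, False],
            [True,  True,  False],
            [False, False, False]]

    survivor_location = (2, 0)   # search-and-rescue objective
    target_location = (2, 2)     # hunt/kill objective

    print(shortest_path(grid, (0, 0), survivor_location))
    print(shortest_path(grid, (0, 0), target_location))

The “knowledge” component is byte-for-byte identical in both calls; nothing about the capability itself determines which end it gets pointed at.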
Not exactly true… You need to conclude “can be taken by its creators to make it return a different answer while it remains an Oracle”. With that caveat inserted, I’m not sure what your point is… Depending on how you define the terms, either your implication is true by definition, or the premise is agreed to be false by pretty much everyone.
You need to conclude “can be taken by its creators to make it return a different answer while it remains an Oracle”. With that caveat inserted, I’m not sure what your point is...
That was my point. If you accept the premise that superintelligence implies the adoption of some sort of objective moral code, then it is no different from an oracle returning correct answers. You can’t change that behavior and retain superintelligence. You’ll end up with a crippled intelligence.
I was just giving an analogous example that highlights the tautological nature of your post. But I suppose that was your intention anyway.
Ah, ok :-) It just felt like it was pulling intuitions in a different direction!