It’s useful for hitting certain philosophers with. Canonical examples: moral realists sceptical of the potential power of AI.
There are philosophers who believe that any superintelligence will inevitably converge to some true code of morality, and that superintelligence is controllable? Who?
As far as I can tell, it’s pretty common for moral realists. More or less, the argument goes:
Morality is just what one ought to do, so anyone who is correct about morality and not suffering from akrasia will do the moral thing.
A superintelligence will be better than us at knowing facts about the world, including facts about morality.
(Optional) A superintelligence will be better than us at avoiding akrasia.
Therefore, a superintelligence will behave more morally than us, and will eventually converge on true morality.
So, the moral realists believe a superintelligence will converge on true morality. Do they also believe that superintelligence is controllable? I had thought they would believe that superintelligence is uncontrollable, but approve of whatever it uncontrollably does.
Ah, I missed that clause. Yes, that.
Quite a few I know (not naming names, sorry!) who haven’t thought through the implications. Hell, I’ve only put the two facts together recently in this form.