If you were bad at figuring out morality, you would be in jail. I am not sure what you mean by other people’s morality: I find the idea that there can be multiple valid, effective moralities in a society incoherent... like an economy where everyone has their own currency. You are not in jail, so you learnt morality. (You don’t seem to believe morality is entirely hardwired, because you regard it as varying across short spans of time.)
I also don’t know what you mean by an incorrect extrapolation. If morality is objective, then most people might be wrong about it. However, an AI will not pose a threat unless it is worse than the prevailing standard... the absolute standard does not matter.
Why would an AI dumb enough to believe in 1950s morality be powerful enough to impose its views on a society that knows better?
Why would a smart AI lack mechanisms for disposing of concepts? How could it self-improve without such a mechanism? If it’s too dumb to update, why would it be a threat?
If there is no NGI, there is no AGI. If there is no AGI, there is no threat of AGI. The threat posed by specialised optimisers is quite different... they can be boxed off if they cannot speak.
The failure modes of updateable UFs (utility functions) are wireheading failure modes, not destroy-the-world failure modes.