If you already think the CI applies to humans, why would it be strange to hear that it also applies to an AI? If you don’t think it applies to humans, then “not at all” could amount to “equal force”, and that would also be un-strange.
Well spotted! But why is it NOT strange to hold that the CI applies to an AI? Isn’t the raison d’être of AI to operate on hypothetical imperatives?
Depends how you define “imperative”. Is “maximize human CEV according to such-and-such equations” a deontological imperative or a consequentialist utility function?