Well spotted! But why is it NOT strange to hold that the CI applies to an AI? Isn’t the raison d’être of an AI to operate on hypothetical imperatives?
Depends on how you define “imperative”. Is “maximize human CEV according to such-and-such equations” a deontological imperative or a consequentialist utility function?