Kant’s categorical imperative applies with equal force to AI.
Kant thought it applied to space aliens and other hypothetical minds—why would that be strange?
If you already think the CI applies to humans, why would it be strange to hear that it also applies to an AI? If you don’t think it applies to humans, then “equal force” could just mean “not at all”, and that would also be un-strange.
Well spotted! But why is it NOT strange to hold that the CI applies to an AI? Isn’t the raison d’être of AI to operate on hypothetical imperatives?
Depends how you define “imperative”. Is “maximize human CEV according to such-and-such equations” a deontological imperative or a consequentialist utility function?
What does that mean, exactly?
In reply: at a superficial level, the statement was intended as (wry) humor toward consequentialist friends in the community. Whoever wrote the AI’s code presumably had a hypothetical imperative in mind: “You, the AI, must do such and such in order to reach specified ends, in this case reporting a truthful statement.” And that’s what AI does, right? But if the AI reports that deontology is the way to go and tells you that you owe it reciprocal respect as a rational being bound by certain a priori duties and prohibitions, that sounds quite crazy; after all, it’s only code. Yet might our ready-to-hand conceptions of law and freedom predispose us to believe the statement? Should we believe it?