Ah, thank you, I get it now. I guess for me deontology is just a bunch of consequentialist computational shortcuts, necessary because of the limited computational capacity of the human brain and because of its buggy wetware.
Presumably the AI in this failed utopia would not need deontology: it has enough power and reliability to recompute the rules every time it makes a decision, working from terminal goals rather than intermediate ones, and so it would not be vulnerable to lost purposes.
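To make the "computational shortcuts" framing concrete, here is a minimal toy sketch in Python (all names and the tiny action/outcome tables are hypothetical illustrations, not anything from the original discussion): a rule compiled once from the terminal goal is cheap to apply, but it silently goes stale when the world changes, which is the lost-purposes failure; an agent with enough compute to re-derive its choice from the terminal goal each time does not have that problem.

```python
def expected_utility(action, world):
    """Consequentialist evaluation: score an action by the outcome it causes.
    In this toy model, the 'world' is just a table mapping actions to utilities."""
    return world[action]

def recompute_choice(actions, world):
    """The unbounded agent: re-derives the best action from the terminal goal
    on every decision, so its choices track changes in the world."""
    return max(actions, key=lambda a: expected_utility(a, world))

def compile_rule(actions, world):
    """The bounded agent's shortcut: solve the problem once, then cache the
    answer as a rule ('always do X') that ignores the current world state."""
    best = recompute_choice(actions, world)
    return lambda _actions, _world: best

# Compile a rule in the world where it was originally a good idea.
old_world = {"tell_truth": 10, "lie": 2}
rule = compile_rule(list(old_world), old_world)

# The world shifts; the cached rule no longer serves the terminal goal.
new_world = {"tell_truth": 1, "lie": 0, "stay_silent": 10}
actions = list(new_world)
print(rule(actions, new_world))              # tell_truth  (a lost purpose)
print(recompute_choice(actions, new_world))  # stay_silent (recomputed from the goal)
```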
The 107 rules are all deontological, unless one of them is “maximize happiness”.