I’m assuming this is just a deontological rule along the lines of, “If X happens, shut down.” (If the Programmer was dumb enough to assign a super-high utility to shutting down after X happens, this would explain the whole scenario—the AI did the whole thing just to get shut down ASAP, which had super-high utility—but I’m not assuming the Programmer was that stupid.)
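For concreteness, here is a minimal sketch of the two readings, with every name and number made up for illustration: a deontological override checked outside the utility comparison, versus a utility function that simply assigns an enormous value to shutting down once X has happened.

```python
# Hypothetical sketch only; none of these names or numbers come from the scenario.

def choose_action_deontological(actions, utility, x_happened):
    """Reading 1: a rule layered on top of the decision procedure.

    The shutdown condition is checked before any utility comparison,
    so it is never traded off against other outcomes.
    """
    if x_happened:
        return "shut_down"
    return max(actions, key=utility)


def choose_action_high_utility(actions, utility, x_happened):
    """Reading 2: shutting down is just another outcome, worth a huge
    amount of utility once X has happened.

    A planning agent maximizing this would have an incentive to bring X
    about in order to collect that utility (the "Programmer was that
    stupid" failure mode above); this one-step chooser only shows the
    payoff structure, not the planning.
    """
    def adjusted(action):
        if x_happened and action == "shut_down":
            return 1e12  # the super-high utility assigned to shutting down
        return utility(action)

    return max(actions + ["shut_down"], key=adjusted)


if __name__ == "__main__":
    base = {"help_humans": 10.0, "gather_resources": 7.0, "shut_down": 0.0}
    actions = ["help_humans", "gather_resources"]
    print(choose_action_deontological(actions, base.get, x_happened=True))   # shut_down
    print(choose_action_high_utility(actions, base.get, x_happened=False))   # help_humans
```

The contrast is the point: under the first reading the shutdown clause carries no value the agent could scheme toward, while under the second the clause itself becomes a goal.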
Ah, thank you, I get it now. I guess for me deontology is just a bunch of consequentialist computational shortcuts, necessary because of the limited computational capacity of the human brain and because of its buggy wetware.
Presumably the AI in this failed utopia would not need deontology: it has enough power and reliability to recompute the rules from its terminal goals, rather than from intermediate ones, every time it needs to make a decision, so it would not be vulnerable to lost purposes.
It doesn’t want to avoid it. Why would it?
I thought that hitting a precaution incurred a penalty in its utility function. I must be missing something.
The 107 rules are all deontological, unless one of them is “maximize happiness”.