They could possibly come up with an alternative, but we must consider that killing them may well be the most efficient thing to do, unless we implement goals that make killing the least efficient option. If you're going with AI, then there is another thing to consider: time. How much time would the AI spend considering its options and judging the person in question? The shortest amount possible? The longest? There is no such thing as an ultimate trade-off.
I’m not sure what exactly you’re trying to say here.
In other words, we have to set its goal as the ability to predict our values, which is a problem, since you can't specify AI goals in English.
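To make the distinction concrete: a minimal sketch of what "a goal as the ability to predict our values" might look like in practice, assuming a toy setup where the "goal" is a numeric objective learned from pairwise human comparisons rather than an English sentence. All names here (HIDDEN_HUMAN_WEIGHTS, human_prefers, train_value_predictor) are hypothetical illustrations, not anyone's actual proposal.

```python
import random

# Toy "outcomes" are feature vectors; the hidden human preference is a
# weighting we pretend can only be queried through pairwise comparisons.
HIDDEN_HUMAN_WEIGHTS = [0.7, -1.2, 0.4]


def human_prefers(a, b):
    """Simulated human judgment: does the person like outcome a more than b?"""
    score = lambda x: sum(w * f for w, f in zip(HIDDEN_HUMAN_WEIGHTS, x))
    return score(a) > score(b)


def train_value_predictor(num_queries=2000, lr=0.05):
    """Learn weights that reproduce the human's pairwise choices
    (a logistic-regression-style update on preference comparisons)."""
    learned = [0.0, 0.0, 0.0]
    for _ in range(num_queries):
        a = [random.uniform(-1, 1) for _ in range(3)]
        b = [random.uniform(-1, 1) for _ in range(3)]
        target = 1.0 if human_prefers(a, b) else 0.0
        diff = [ai - bi for ai, bi in zip(a, b)]
        z = sum(w * d for w, d in zip(learned, diff))
        pred = 1.0 / (1.0 + 2.718281828 ** (-z))
        # Nudge the learned weights toward reproducing the human's choice.
        learned = [w + lr * (target - pred) * d for w, d in zip(learned, diff)]
    return learned


if __name__ == "__main__":
    print("learned value weights:", train_value_predictor())
    print("true (hidden) weights: ", HIDDEN_HUMAN_WEIGHTS)
```

The point of the sketch is only that the "goal" the system ends up optimizing is whatever the learned scoring function encodes, which may or may not match what the English description intended.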
I’m not sure what exactly you’re trying to say here.
Yup.