In theory, sure. In practice, there are a number of social dynamics, such as people’s tendency to abuse power, that would make this option not worthwhile.
Alright, so what if it was done by a hypothetical superintelligent AI or an omniscient being of some sort? Would you be okay with it then?
Similar considerations apply to a lot of other things, including many of the ones you mention, such as creating an “eye for an eye” society. Yes, you could get overall bad results if you just single-mindedly optimized for one or two variables, but that’s why we try to look at the whole picture.
This is exactly what I mean. What are we trying to “optimize” for?
Probably not, because if it really was a superintelligent AI, it could solve the problem without needing to kill anyone.
For general well-being. Something along the lines of “the amount of happiness minus the amount of suffering”, or “the successful implementation of preferences”, would probably be a decent first approximation, but even those have plenty of caveats (we probably wouldn’t want to just turn everyone into wireheads, for instance). Human values are too complex to really be summed up in any brief description. Or book-length ones, for that matter.
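As a minimal sketch of that wirehead caveat, with completely made-up states and numbers, here is how a literal “happiness minus suffering” objective goes wrong:

```python
# Toy illustration: a naive optimizer scoring world-states by
# "total happiness minus total suffering" ends up picking wireheading.
# The states and numbers below are invented purely for illustration.

def naive_wellbeing(state):
    """First-approximation objective: happiness minus suffering."""
    return state["happiness"] - state["suffering"]

candidate_states = {
    "status quo":          {"happiness": 60, "suffering": 40},
    "better institutions": {"happiness": 75, "suffering": 25},
    "everyone wireheaded": {"happiness": 100, "suffering": 0},
}

best = max(candidate_states, key=lambda name: naive_wellbeing(candidate_states[name]))
print(best)  # -> "everyone wireheaded": the proxy is maximized, not the values behind it
```

The one-line metric gets maximized while the values it was standing in for get lost, which is the caveat in miniature.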
They could possibly come up with an alternative, but we must consider that it may very well be that the most efficient thing to do is to kill them, unless we implement goals that make killing the least efficient option. If you’re going with an AI, then there is another thing to consider: time. How much time would the AI spend considering its options and judging the person in question? The shortest amount of time possible? The longest? There is no such thing as an ultimate trade-off.
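To make the “goals that make killing the least efficient option” part concrete, here is a toy sketch; the options, costs, and penalty weight are all hypothetical:

```python
# Toy sketch: an agent ranking options purely by "efficiency" (negative cost)
# versus one whose objective also carries a large penalty on harmful options.
# The options, costs, and penalty weight are hypothetical.

options = {
    "kill the offender":        {"cost": 1,  "harm": True},
    "rehabilitate":             {"cost": 10, "harm": False},
    "restrain and investigate": {"cost": 5,  "harm": False},
}

HARM_PENALTY = 1_000_000  # chosen large enough to dominate any efficiency gain

def efficiency_only(option):
    return -option["cost"]

def efficiency_with_penalty(option):
    return -option["cost"] - (HARM_PENALTY if option["harm"] else 0)

print(max(options, key=lambda k: efficiency_only(options[k])))          # kill the offender
print(max(options, key=lambda k: efficiency_with_penalty(options[k])))  # restrain and investigate
```

The second agent picks differently only because a harm penalty was built into its objective, not because killing stopped being the cheapest option.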
In other words, we have to set its goal as predicting our values, which is a problem, since you can’t write an AI’s goals in English.
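As a very crude sketch of what “set its goal as predicting our values” could look like in practice, the goal ends up being a scoring function fit to example human judgments rather than anything written in English. The features, example verdicts, and learning rule below are all invented for illustration:

```python
# Toy sketch of "set its goal as predicting our values": instead of an English
# sentence, the goal is a learned scoring function fit to example human judgments.
# Features, examples, and the learning rule are all invented for illustration.

# Each outcome is described by two made-up features, with a human verdict in [-1, 1].
labeled_outcomes = [
    ({"suffering_caused": 0.9, "preferences_satisfied": 0.1}, -1.0),
    ({"suffering_caused": 0.1, "preferences_satisfied": 0.8}, +1.0),
    ({"suffering_caused": 0.5, "preferences_satisfied": 0.5},  0.0),
]

weights = {"suffering_caused": 0.0, "preferences_satisfied": 0.0}

def score(outcome):
    """The AI's learned 'goal': a weighted sum over outcome features."""
    return sum(weights[f] * v for f, v in outcome.items())

# A few passes of plain gradient descent on squared error against the human verdicts.
for _ in range(500):
    for outcome, verdict in labeled_outcomes:
        error = score(outcome) - verdict
        for f, v in outcome.items():
            weights[f] -= 0.05 * error * v

print(weights)  # roughly: negative weight on suffering, positive on preference satisfaction
```

Actual value-learning proposals are far more involved, but the point survives: the “goal” lives in learned parameters, not in an English sentence.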
I’m not sure what exactly you’re trying to say with the point about time and trade-offs.
As for having to set its goal as predicting our values, and not being able to write that in English: yup.