If you’re a consequentialist, trolley problems are easy.
Only if you know whether or not someone is watching!
That is, getting caught not acting like a deontologist is a consequence that must sometimes be avoided. This becomes relevant when considering, for example, whether to murder AGI developers with a relatively small chance of managing friendliness but a high chance of managing recursive self-improvement.
Relevant, perhaps, but if you absolutely can’t talk them out of it, the negative expected utility of allowing them to continue could far outweigh that of being imprisoned for murder.
Of course, it would take a very atypical person to actually carry through on that choice, but if humans weren’t so poorly built for utility calculations, we might not even need AGI in the first place.