imo if we get close enough to aligned that “the AI doesn’t support euthanasia” is an issue, we’re well out of the valley of actually dangerous circumstances. Human values already vary extensively, and this post feels like trying to cook up some sort of objectivity in a place where it doesn’t really exist.
The horror story people are worried about is “we suffer a lot, the AI doesn’t care or makes it worse, and it doesn’t allow us to escape through death.”