Please explain how, say, the trolley problem fits into your framework.
The correct choice is to consider whom you would rather see killed and whom saved, and what, for instance, the social consequences of your action would be. I don’t understand your question, it seems.
Suppose you don’t have any time to figure out which people would be the better ones to save. And suppose no one else will ever know that you were able to pull the switch.
Then my current algorithms will do the habitual things I do in similar situations, or randomly explore the possible outcomes (as in “play”), just as in every other severely constrained situation.
Honestly, it seems like your notion of ethics is borderline psychopathic.
What does this mean?