Suppose you don’t have any time to figure out which people it would be better to save. And suppose no one else will ever know that you were able to pull the switch.
Then my current algorithms will do the habitual things I’m used to doing in similar situations, or randomly explore the possible outcomes (as in “play”), as in every other severely constrained situation.
Honestly, it seems like your notion of ethics is borderline psychopathic.
Then my current algorithms will do the habitual things I’m used to doing in similar situations, or randomly explore the possible outcomes (as in “play”), as in every other severely constrained situation.
What does this mean?