I should say we assume that we’re deciding which one a stable, incorruptible AI should choose. I’m pretty sure any moral system that chose torture in situations like this would not lead to good outcomes if applied in practice, but that’s not what I’m wondering about; I’m just trying to figure out which outcome is better. In short, I’m asking an axiological question, not a moral one. https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/
My intuition strongly says that the torture is worse here, even though I choose torture in the original problem. But I don’t have an argument for this, because my normal axiological system, preference utilitarianism, seems unavoidably to say that torture is better.