If consequences are completely ignored, I lean towards the torture, but if consequences are considered I would choose no torture, out of hope that it accelerates moral progress (at least if they had never seen someone who "ought to be tortured" get away with it, the first one might spark change, which might be good?). In the speck case, I choose torture.
I should say we assume that we're deciding which one a stable, incorruptible AI should choose. I'm pretty sure any moral system which chose torture in situations like this would not lead to good outcomes if applied in practical circumstances, but that's not what I'm wondering about; I'm just trying to figure out which outcome is better. In short, I'm asking an axiological question, not a moral one. https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/
My intuition strongly says that the torture is worse here even though I choose torture in the original, but I don’t have an argument for this because my normal axiological system, preference utilitarianism, seems to unavoidably say torture is better.
Although under strict preference utilitarianism, wouldn't a change in values (i.e., moral progress) be considered bad, for the same reason a paperclip maximizer would consider it bad?