Rather, I meant to say: I expect LW posters to largely agree that it can be correct to select an option with lower expected utility, according to the naive calculation, in order to prevent such situations from arising in the first place (that is, it is correct to have a decision function that selects such options, and if you don't actually select such options then you don't have that decision function). It seems possibly reasonable to construe an organization that offers high utility but opposes specific human rights as creating such a situation (I do not comment on whether this is actually the case in our world).
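A toy numeric sketch of that point, with entirely made-up numbers and a generic "threat" framing of my own (not a claim about the organization in question): the option that looks worse inside the situation can belong to the policy that does better overall, because the policy affects how often the situation arises at all.

```python
# Hypothetical numbers only. Compares two decision functions ("policies"):
# one that gives in once the bad situation has arisen, one that refuses.

def expected_utility(policy: str) -> float:
    """Overall expected utility of a policy, where the probability that the
    bad situation arises at all depends on which policy the agent is known to have."""
    if policy == "give_in":
        p_situation = 0.9      # agents known to capitulate face the situation often
        u_in_situation = -10   # locally better: pay up / comply
    else:  # "refuse"
        p_situation = 0.1      # agents known to refuse rarely face it
        u_in_situation = -50   # locally worse: the threat is carried out
    u_otherwise = 0
    return p_situation * u_in_situation + (1 - p_situation) * u_otherwise

for policy in ("give_in", "refuse"):
    print(policy, expected_utility(policy))
# give_in -9.0
# refuse  -5.0
# Inside an already-arisen situation, refusing looks worse (-50 vs -10),
# yet the refusing policy wins overall because it makes the situation rarer.
```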