People associated with EA are likely to decide at some point that the normal rules for the organization do not apply to them, if they expect that they can generate a large enough positive impact in the world by disregarding those rules.
I am myself a consequentialist at my core, but invoking consequentialism to justify breaking commitments, non-cooperation, theft, or whatever else is simply a bad policy (the notion of people doing this stirs strong emotions in me); as a policy/algorithm, it won't accomplish one's consequentialist goals.
I fear what you say is not wholly inaccurate and is true of at least some in EA, though I hope it is not true of many.
Where it does get tricky is with potentially unilateral pivotal acts, which I think go in this direction but also feel different from what you describe.