There are what get called “stand by the Levers of Power” strategies, things like getting into positions within companies and governments that let you push for better AI outcomes. I don’t know if they’re good, but I do think SBF might have made that kind of strategy a lot harder.
I think this is an important point: one idea that is very easy to take away from the FTX and OpenAI situations is something like
People associated with EA are likely to decide at some point that the normal rules for the organization do not apply to them, if they expect that they can generate a large enough positive impact in the world by disregarding those rules. Any agreement you make with an EA-associated person should be assumed to have an “unless I think the world would be better if I broke this agreement” rider (in addition to the usual “unless I stand to personally gain a lot by breaking this agreement” rider that people already expect and have developed mitigations for).
Basically, I expect that the strategy of “attempt to get near the levers of power in order to be able to execute weird plans where, if the people in charge of the decision about whether to let you near the levers of power knew about your plans, they never would have let you near the levers of power in the first place” will work less well for EAs in the future. To the extent that EAs actually have a tendency to attempt those sorts of plans, it’s probably good that people are aware of that tendency.
But if you start from the premise of “EAs having more ability to influence the world is good, and the reason they have that ability is not relevant”, then this weekend was probably quite bad.
People associated with EA are likely to decide at some point that the normal rules for the organization do not apply to them, if they expect that they can generate a large enough positive impact in the world by disregarding those rules.
I am myself a consequentialist at my core, but invoking consequentialism to justify breaking commitments, non-cooperation, theft, or whatever else is just a stupidly bad policy (the notion of people doing this generates some strong emotions for me); as a policy/algorithm, it won’t result in accomplishing one’s consequentialist goals.
I fear what you say is not wholly inaccurate and is true of at least some in EA, though I hope it’s not true of many.
Where it does get tricky is potentially unilateral pivotal acts, which I think go in this direction but also feel different from what you describe.