In that comment I was only offering plausible counter-arguments to “the amount of people that were hurt by FTX blowing up is a rounding error.”
How to model all the related factors is complicated, and claiming to easily know whether the effects are negative or positive in expectation, without running any numbers, seems unjustified to me.
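As a minimal sketch of why running the numbers matters (every parameter name and value here is made up purely for illustration): even a crude two-term expected-value model flips sign under mildly different assumptions, so the answer isn't obvious in advance.

```python
# Purely illustrative toy model with made-up parameters: the point is only
# that the sign of the expectation depends on the assumptions, so it can't
# be "easily known" without actually running numbers.

def expected_value(p_funding_helps, value_of_funding,
                   p_norm_damage, cost_of_norm_damage):
    """Net expected value of a fraud-funded donation, under a crude model."""
    return (p_funding_helps * value_of_funding
            - p_norm_damage * cost_of_norm_damage)

# One set of assumptions gives a positive expectation...
print(expected_value(0.10, 1000, 0.50, 100))   # +50.0
# ...and a mildly different set gives a negative one.
print(expected_value(0.05, 1000, 0.80, 100))   # -30.0
```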
I think we basically agree here.
I’m in favour of more complicated models that include more indirect effects, not fewer.
Maybe the difference is this: I think that in the long run (over decades, including the actions of many EAs as influential as SBF), an EA movement with strong norms against lying, corruption, and fraud actually ends up more likely to save the world, even if it gets less funding in the short term.
The fact that I can’t predict and quantify ahead of time all the possible harms that result from fraud doesn’t convince me that those concerns are unjustified.
We might be living in a world where SBF stealing money and giving $50B to longtermist causes very quickly really is our best shot at preventing AI disaster, but I doubt it.
Apart from anything else, I don’t think money is necessarily the most important bottleneck.
We already have an EA movement in which the leading organization has no problem editing elements out of a picture it publishes on its website because of possible PR risks. While you can argue that this isn’t literally lying, it comes very close, and it suggests the kind of environment that does not have the strong norms that would be desirable.
I don’t think FTX/Alameda doing this in secret strongly damaged general norms against lying, corruption, and fraud.
Their blowing up like this is actually a chance to move toward those norms: a chance to look at ethics in a different way and make it clearer that being honest and transparent is good.
Saying that there was “poor messaging on our part” and that the “actions were negative in expectation in a purely utilitarian perspective” is a way to avoid the actual conversation about ethical norms, the kind of conversation that might produce a shift toward stronger norms for truth.