If you believe that each future person is as valuable as each present person, and that there will be 10^100 people in the future lightcone, then the number of people who were hurt by FTX blowing up is a rounding error.
But you have to count the effect of the indirect harms on the future lightcone too. There’s a longtermist argument that SBF’s (alleged and currently very likely) crimes plausibly did more harm than all the wars and pandemics in history if...
Governments are now 10% less likely to cooperate with EAs on AI safety
The next 2 EA mega-donors decide to pass on EA
(Had he not been caught:) The EA movement drifted towards fraud and corruption
etc.
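To make the scale of this kind of argument concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it (the size of the future lightcone, the count of people directly hurt, the shift in catastrophe probability) is an illustrative assumption rather than a figure anyone in this exchange has defended; the only point is that once the future lightcone enters the expected-value calculation, even a tiny probability shift swamps any plausible direct harm.

```python
# Rough expected-value comparison of direct vs. indirect harms.
# All numbers below are illustrative assumptions, not measured quantities.

FUTURE_PEOPLE = 1e100          # assumed population of the future lightcone

# Direct harm: order-of-magnitude guess at people financially hurt by FTX.
direct_harm = 1e6

# Indirect harm: assume lost government cooperation on AI safety raises the
# probability of existential catastrophe by a tenth of a percentage point.
p_catastrophe_before = 0.100   # assumed baseline
p_catastrophe_after = 0.101    # assumed level after the FTX collapse

indirect_harm = (p_catastrophe_after - p_catastrophe_before) * FUTURE_PEOPLE

print(f"direct harm:     ~{direct_harm:.0e} people")
print(f"indirect harm:   ~{indirect_harm:.0e} expected future lives")
print(f"indirect/direct: ~{indirect_harm / direct_harm:.0e}")
```

The same arithmetic applies with the sign flipped (e.g. counting hypothetical mega-donors inspired by SBF's apparent success), which is part of why the net effect is hard to call without actually running numbers.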
You are, however, only counting one side here. SBF appearing successful was a motivating example for others to start projects that could have made them mega-donors.
Governments are now 10% less likely to cooperate with EAs on AI safety
I don’t think that’s likely to be the case.
The next 2 EA mega-donors decide to pass on EA
It’s unclear here what “pass on EA” means. Zvi wrote about the Survival and Flourishing Fund not being an EA fund.
How to model all the related factors is complicated. Saying that you easily know the right answer to whether the effects are negative or positive in expectation, without running any numbers, seems unjustified to me.
In that comment, I was only offering plausible counter-arguments to “the number of people who were hurt by FTX blowing up is a rounding error.”
How to model all the related factors is complicated. Saying that you easily know the right answer to whether the effects are negative or positive in expectation, without running any numbers, seems unjustified to me.
I think we basically agree here.
I’m in favour of more complicated models that include more indirect effects, not fewer.
Maybe the difference is: I think in the long run (over decades, including the actions of many EAs as influential as SBF) an EA movement that has strong norms against lying, corruption and fraud actually ends up more likely to save the world, even if it gets less funding in the short term.
The fact that I can’t predict and quantify ahead of time all the possible harms that result from fraud doesn’t convince me that those concerns are unjustified.
We might be living in a world where SBF stealing money and giving $50B to longtermist causes very quickly really is our best shot at preventing AI disaster, but I doubt it.
Apart from anything else, I don’t think money is necessarily the most important bottleneck.
We already have an EA movement where the leading organization has no problem editing out elements of a picture it publishes on its website because of possible PR risks. While you can argue that it’s not literally lying, it comes very close, and it suggests the kind of environment that does not have the strong norms that would be desirable.
I don’t think FTX/Alameda doing this in secret strongly damaged general norms against lying, corruption, and fraud.
Their blowing up like this is actually a chance to move toward those norms. It’s a chance to look at ethics in a different way and make it clearer that being honest and transparent is good.
Saying it was “poor messaging on our part” that resulted in actions that “were negative in expectation in a purely utilitarian perspective” is a way to avoid having the actual conversation about ethical norms, the conversation that might produce change toward stronger norms for truth.