The analogy between SBF and Helen Toner is completely misguided. SBF did deeply immoral things, with catastrophic results for everyone, whatever his motivations may have been. With Toner, we don’t know what really happened, but if she indeed was willing to destroy OpenAI for safety reasons, then AFAICT she was 100% justified. The only problem is that she didn’t succeed. (Where “success” would mean actually removing OpenAI from the gameboard, rather than e.g. rebranding it as part of Microsoft.)
There is certainly no moral equivalence between the two of them; SBF was a fraud and Toner was (from what I can tell) acting honestly according to her convictions. Sorry if I didn’t make that clear enough.
But I disagree about destroying OpenAI—that would have been a massive destruction of value and very far from justified IMO.
Why would destroying OpenAI be positive for safety? I simply do not see any realistic arguments for that being the case.
When negotiating, it can be useful to be open to outcomes that are a net destruction of value, even if such an outcome is not what you ideally want.