I followed the aftermath of FTX and the trial quite closely and I agree with your takes.
Also +1 to mentioning the suspiciousness around Alameda’s dealings with Tether. It’s weird that this hasn’t been talked about much so far.
On the parts of your post that contain criticism of EA:
We are taking many of the brightest young people. We are telling them to orient themselves as utility maximizers with scope sensitivity, willing to deploy instrumental convergence. Taught by modern overprotective society to look for rules they can follow so that they can be blameless good people, they are offered a set of rules that tells them to plan their whole lives around sacrifices on an altar, with no limit to the demand for such sacrifices. And then, in addition to telling them to in turn recruit more people to and raise more money for the cause, we point them into the places they can earn the best ‘career capital’ or money or ‘do the most good,’ which more often than not have structures that systematically destroy these people’s souls. SBF was a special case. He, among other things, and in his own words, did not have a soul to begin with. But various versions of this sort of thing are going to keep happening, if we do not learn to ground ourselves in real (virtue?!) ethics, in love of the world and its people.
[...]
Was there a reckoning, a post-mortem, an update, for those who need one? Somewhat. Not anything like enough. There was a rush to deontology that died away quickly, mostly retreating back into its special enclave of veganism. There were general recriminations. There were lots of explicit statements that no, of course we did not mean that and of course we do not endorse any of that, no one should be doing any of that. And yes, I think everyone means it. But it’s based on, essentially, unprincipled hacks on top of the system, rather than fixing the root problem, and the smartest kids in the world are going to keep noticing this. We need to instead dig into the root causes, to design systems and find ways of being that do not need such hacks, while still preserving what makes such real efforts to seek truth and change the world for the better special in the first place.
Interesting take! I’m curious to follow the discussion around this that your post inspired.
I wish someone who is much better than me at writing things up in an intelligible and convincing fashion would make a post with some of the points I made here. In particular, I would like to see more EAs acknowledge that longtermism isn’t true in any direct sense, but rather that it’s indirectly about our preferences as altruists (see the section “Caring about the future: a flowchart”). Relatedly, EAs would probably be less fanatical about their particular brand of maximizing morality if they agreed that “What’s the right maximizing morality?” has several defensible answers. Those of us who make maximizing morality a part of our life goals shouldn’t feel like we have moral realism on our side when we consider overruling other people’s life goals. Respecting other people’s life goals, even when they don’t share your maximizing morality, is an ethical principle that’s at least as compelling and well-justified from a universalizing, altruistic stance as any particular brand of maximizing consequentialism.