Was there a reckoning, a post-mortem, an update, for those who need one? Somewhat. Not anything like enough.
I feel like you aren’t giving enough credit here (and possibly just underestimating the strength of the effect?). IMO the EA community has had a reckoning, a post-mortem, an update, etc. far more than most social or political movements would (and do) in response to similar misbehavior from a prominent member. And for sufficiently large groups of people, there is no reckoning at all, because there is safety in numbers—if a normal person commits a crime, other normal people who haven’t committed crimes yet don’t feel any pressure to be less normal.
I’m curious to operationalize forecasting questions based on this. Maybe something like “will there be another instance of a prominent EA committing fraud?”
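(For concreteness, here is a rough sketch of how such a question might be operationalized. Every threshold, date, and criterion below is a hypothetical illustration, not something proposed in this thread.)

```python
# Rough sketch of an operationalized forecasting question.
# All thresholds, dates, and criteria are hypothetical illustrations.
question = {
    "title": ("Will another prominent EA be credibly found to have "
              "committed large-scale fraud before 2030?"),
    "resolution_criteria": [
        # "Prominent" is the hard part to pin down; one possible proxy:
        "The person was publicly identified with EA, or funded/platformed "
        "by a major EA org, for at least a year before the fraud began.",
        # A size threshold to exclude trivial cases (hypothetical figure):
        "The fraud involved at least $10M in losses to victims.",
        # A credibility standard for resolution:
        "A court conviction, regulatory finding, or bankruptcy examiner's "
        "report establishes the fraud.",
    ],
    "resolves_yes_if": "all criteria are met before 2030-01-01",
}

for criterion in question["resolution_criteria"]:
    print("-", criterion)
```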
What is this f***ing post-mortem? What was the root-cause analysis? Where is the list of changes that have been made to prevent an impulsive and immoral man like Sam from taking tons of resources, talent, and prestige from the Effective Altruism ecosystem and committing crimes of a magnitude for which a typical human lifetime is not long enough to make right? Was it due to rapid growth beyond the movement’s ability to vet people? Was it due to people in leadership being afraid to investigate accusations of misbehavior? What was the cause here that has been fixed?
Please do not claim that things have been fixed without saying concretely what you believe has been fixed. I have seen far too many people continue roughly business as usual. It sickens me.
Do you agree with my comparative claim? EA vs. Democrats or Republicans, for example, or EA vs. Social Justice, or EA vs. idk pick some other analogous movement.
I could make a bigass list of EA forum and LW posts arguing about how to interpret what happened and lashing out with various bits of blame here and there. Pretty much all of the lessons/criticisms Zvi makes in this post have been made multiple times before. Including by e.g. Habryka, whom I respect greatly and admire for doing so. But I don’t feel motivated to make this list and link it here because I’m pretty sure you’ve read it all too; our disagreement is not about the list but whether the list is enough.
Notice, also, that I didn’t actually say “The problem is fixed.” I instead expressed doubt in the “not anything like enough” claim. I mused that it would be good to make some forecastable predictions. This was because I myself am unsure about what to think here. I have appreciated the discussions and attempted lesson-taking from the SBF disaster and I’m glad it’s happening & I support doing more of it.
[I feel like this conversation is getting somewhat heated btw; if you like I’d be happy to have a phone or video call or just plan to meet up in person some day if you like. This is not an ask, just an offer just in case you’d like that.]
I think your description of what happened with SBF is accurate, and I think that it is significantly less likely to happen again given how the SBF thing blew up and how people have reacted to it. (I take it we disagree about this.)
That said, I’m maybe not as plugged in to the EA community as I should be? idk. I’d be curious to hear what your concerns are—e.g. are there people who seem to you to be impulsive and immoral and on a path to gain in prestige and influence? Fair enough if you don’t want to say who they are (though I’d love that) but I’d be curious to hear whether you have specific people in mind or are just saying abstractly that we are still vulnerable to this failure mode since we haven’t fixed the root causes.
I think if I were to answer that question, I’d probably say something like “I know of one or two very sketchy people but they don’t seem to have much influence. Then there is Anthropic, which seems to me to be a potential SBF-risk: lots of idealistic people, very mission-driven, lots of trust in leadership, making very important decisions about the fate of humanity. Could go very badly if leadership is bad in the ways SBF was bad. That said, I don’t expect that to be the case & would be curious to get concrete and make forecasts and hear evidence. I currently think Anthropic is net-positive in expectation, which is saying a lot since it’s an AGI company and I think there’s something like a 70% chance of unaligned AGI takeover by the end of this decade.”

I don’t feel confident about any of this.

I think the following can be and are both true at once:

What happened was not anything like enough.

What happened was more than one would expect from a political party, or a social movement such as social justice.
I certainly agree this is possible. Insofar as you think that’s not only possible but actual, then thanks, that’s a helpful clarification of your position. Had you said something like this above I probably wouldn’t have objected, at least not as strongly, and instead would have just asked for predictions.
Do you agree with my comparative claim? EA vs. Democrats or Republicans, for example, or EA vs. Social Justice, or EA vs. idk pick some other analogous movement.
What I expect for a movement of this scale or larger, where a prominent figure has a scandal of this level, is that many people wring their hands over it, some minor changes get made, people take lots of defensive PR actions, but nobody is in a position to really fix the underlying problems and it isn’t really tried. Some substantive status-allocation changes happen, trust is lowered, and then it continues on regardless. I currently cannot distinguish the Effective Altruism ecosystem from this standard story. Beyond FTX, who has been fired? Who has stepped forward and taken responsibility? Who has admitted to major fault?
I suspect the main thing the EA ecosystem has done better is engage in less actively criminal or unethical behavior in the cover-up and in the PR defense, while not actually fixing anything. That is a low bar, and this is still best described as “a failure”.
I also think any of those movements/ecosystems would have a ton of energy for discussion and finger-pointing and attempts to use this issue to change people’s status. Perhaps you are misreading “lots of bickering” as “there has been a reckoning”. The EA Forum is filled with squabbling of this sort and is a substantial reason why I do not read it.

IIRC Will MacAskill admitted to major fault, though I don’t remember what he said and wasn’t paying close attention. Here’s the statement I remembered: A personal statement on FTX — EA Forum (effectivealtruism.org)
I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception.
...
If FTX misused customer funds, then I personally will have much to reflect on. Sam and FTX had a lot of goodwill – and some of that goodwill was the result of association with ideas I have spent my career promoting. If that goodwill laundered fraud, I am ashamed.
As a community, too, we will need to reflect on what has happened, and how we could reduce the chance of anything like this happening again. Yes, we want to make the world better, and yes, we should be ambitious in the pursuit of that.
But that in no way justifies fraud. If you think that you’re the exception, you’re duping yourself.
We must make clear that we do not see ourselves as above common-sense ethical norms, and must engage criticism with humility.
I know that others from inside and outside of the community have worried about the misuse of EA ideas in ways that could cause harm. I used to think these worries, though worth taking seriously, seemed speculative and unlikely.
I was probably wrong. I will be reflecting on this in the days and months to come, and thinking through what should change.
I read this as an admission of guilt and responsibility. What do you wish he had said?
Does it matter what he said? What has he done? As far as I’m aware he is mostly getting on with being a prominent figurehead of EA and a public intellectual.
Also, this is hardly an admission of guilt. It primarily says “This seems bad and I will reflect on it.” He didn’t say:
“This theft of many thousands of people’s life savings will forever be part of the legacy of Effective Altruism, and I must ensure that this movement is not responsible for something even worse in the future. I take responsibility for endorsing and supporting this awful person and for playing a key role in building an ecosystem in which he thrived. I have failed in my leadership position and I will work to make sure this great injustice cannot happen again and that the causes are rectified, and if I cannot accomplish that with confidence within 12 months then I will no longer publicly support the Effective Altruism movement.”
I read this as an admission of guilt and responsibility. What do you wish he had said?
I think it’s a decent opening and it clearly calls for reflection, but you might notice that indeed no further reflection has been published, and Will has not published anything that says much about what lessons he has taken away from all this.
To be clear, as I understand the situation Will did indeed write up a bunch of reflections, but then the EV board asked him not to because that posed too much legal and PR risk. I agree this is some evidence about Will showing some remorse, but also evidence that the overall leadership does not care very much about people learning from what happened (at least compared to increased PR and legal risk).
I think this is a potentially large cost of the fiscal sponsorship umbrella. Will can’t take on the risk personally or even for just his org, it’s automatically shared with a ton of other orgs.
That seems quite plausible. If that is his reasoning, then I think he should say that.
“I had planned to write in more detail about my relationship to Sam and FTX, what actions I took, and in what ways I think my actions did and did not enable these crimes to take place; but due to concerns about risking the jobs of 100+ people I have chosen to not share information about this for the following 1-4 years (that is, until any legal and financial investigation of Effective Ventures, an org that I’m on the board of and that facilitated a lot of financial grantmaking for FTX, has concluded).
This obviously largely prohibits the Effective Altruism ecosystem from carrying out a collective fact-finding effort around those who were closely involved with Sam and FTX within the next 1-4 years, and substantially obstructs a clear fault analysis and post-mortem from occurring, and I expect many readers should correctly update that, by default, the causes of these problems will not be fixed.
I hope that this is not the death of the Effective Altruism ecosystem that I have worked to build over the last 10+ years, but I am not sure how people working and living in this ecosystem can come to trust that crimes of a similar magnitude will not happen again after seeing little-to-no accounting of how this criminal was funded and supported, nor any clear fixes implemented in the ecosystem to prevent such crimes from occurring in the future, and I sadly expect many good people will rightly leave the ecosystem because of it.”
Pretty big if true. If EV actively is censoring attempts to reflect upon what happened, then that is important information to pin down.

I would hope that if someone tried to do that to me, I would resign.

That’s what I told Will to do. He felt like that would be uncollaborative with broader EA leadership.

I wish he had said (perhaps after some time to ponder) “I now realize that SBF used FTX to steal customer funds. SBF and FTX had a lot of goodwill, that I contributed to, and I let those people and the entire community down.
As a community, we need to recognize that this happened in part because of us. And I recognize that this happened partly because of me, in particular. Yes, we want to make the world better, and yes, we should be ambitious in the pursuit of that. But we have been doing so in a way that we can now see can set people on extremely dark and destructive paths.
No promise to do good justifies fraud, or the encouragement of fraud. We have to find a philosophy that does not drive people towards fraud.
We must not see or treat ourselves as above common-sense ethical norms, and must engage criticism with humility. We must fundamentally rethink how to embody utilitarianism where it is useful, within such a framework, recognizing that saying ‘but don’t lie or do fraud’ at the end often does not work.
I know others have worried that our formulation of EA ideas could lead people to do harm. I used to think this was unlikely. I now realize it was not, and that this was part of a predictable pattern that we must end, so that we can be a force for good once more.
I was wrong. I will continue to reflect in the coming months.”
And then, ya know, reflect, and do some things.
The statement he actually made I interpret as a plea for time to process while affirming the bare minimum. Where was his follow-up?
Your proposal seems to me to be pretty similar to what he actually said, just a bit stronger here and there. Ben’s proposal below, by contrast, is much stiffer stuff, mostly because of the last sentence.
That’s helpful, thanks. Can you give an example of a movement of this scale or larger that had a scandal of this level, and reacted better than EA did?
None come to mind. (To be clear, this doesn’t seem cruxy for whether Effective Altruism has succeeded at reforming itself.)
I think instructive examples to look into would be things like:
How the justice system itself investigates crimes. I really like reading published reports where an investigator has been given a lot of resources to figure something out and then writes up what they learned. In many countries it is illegal to lie to an investigator when they are investigating a crime, which means that someone can go around and just ask what happened, then share that and prosecute any unlawful behavior.
How countries deal with their own major human rights violations. I am somewhat interested in understanding things like how the Truth and Reconciliation process went in South Africa, and also how Germany has responded post WWII, where I think both really tried to reform to ensure that the same thing couldn’t happen again.
How companies investigate disasters. Sometimes a massive company will have a disaster or screw-up (e.g. the BP oil spill, the Boeing crashes, the Johnson & Johnson Tylenol poisoning incident) and will conduct a serious investigation and try to fix the problem. I’d be interested in reading successful accounts there and how they went about finding the source of the problem and fixing it.
Religious reformations. The Protestant split was in response to a bunch of theological and pragmatic disagreements and also concerns of corruption (the clergy leading lavish lives). I’d prefer not to have a split and instead have a reform; I suspect there are other instances of major religious reform that went well that one can learn lessons from (and of course also many to avoid).
Can you give an example of a movement of this scale or larger that had a scandal of this level, and reacted better than EA did?
I think almost any large organization/company would have gone through a much more comprehensive fault-analysis and would have made many measurable improvements. For examples of this you could read through the history of Apple, or Tesla, or TSMC, or Intel. You could also look into the reforms that happened to lots of investment banks post 2008.
Companies are different than social movements, though my sense is that in the history of religion there have also been many successful reform efforts in response to various crises, which seems more similar.
As another interesting example, it also seems to me that Germany pretty successfully reformed its government and culture post-World War 2.
I think Germany is an extreme outlier here fwiw, (e.g.) Japan did far worse things and after WW2 cared more about covering up wrongdoing than about admitting fault; further, Germany’s government and cultural “reformation” was very much strongarmed by the US and other allies, whereas the US actively assisted Japan in covering up war crimes.

EDIT: See shortform elaboration: https://www.lesswrong.com/posts/s58hDHX2GkFDbpGKD/linch-s-shortform?commentId=ywf8R3CobzdkbTx3d

Here are some notes on why I think Imperial Japan was unusually bad, even by the very low bar set by the Second World War.

Curious why you say “far worse” rather than “similarly bad”, though this isn’t important to the main conversation.

I started writing a comment reply to elaborate after getting some disagreevotes on the parent comment, but decided that it’d be a distraction from the main conversation; I might expand on my position in an LW shortform at some point in the near future.

Update: OK, now I agree. I encourage you to make a post on it.
I think almost any large organization/company would have gone through a much more comprehensive fault-analysis and would have made many measurable improvements.

I claim Y Combinator is a counterexample.

(The existence of one counterexample obviously doesn’t disagree with the “almost any” claim.)
Notice, also, that I didn’t actually say “The problem is fixed.” I instead expressed doubt in the “not anything like enough” claim.
I feel like you’re implicitly saying that something has really changed! I am finding it hard to think of a world where less would have changed after a scandal this big.
I think your description of what happened with SBF is accurate, and I think that it is significantly less likely to happen again given how the SBF thing blew up and how people have reacted to it.
It is commonly the case that the exact same failure will not repeat itself. I think the broader civilization does not have the skill of avoiding the same thing happening again (e.g. if a second covid came along I do not much expect that civilization would do more than 2x better the second time around, whereas I think one could obviously do 10x-30x better), and so the Effective Altruism movement is doing less dysfunctionally on this measure, in that there will probably not be another $8B crypto fraud. I think this is primarily just because many people have rightly lost trust in the Effective Altruism ecosystem and will not support it as much, not because the underlying generators that were untrustworthy have been fixed.
I mused that it would be good to make some forecastable predictions.
I don’t know how to operationalize things here! I think there are underlying generators that give a lot of power and respect to people without vetting them or caring about obvious ways in which they are low-integrity, unprincipled, and unethical. Most Effective Altruism orgs are in the non-profit sector. I think most people involved will not have the opportunity to have their low ethical standards displayed as undeniably as someone involved in a crypto scam, and I do not expect there is going to be consensus about other scandals in the way there is about this one. So I don’t really know what to forecast, other than “a list of people you and I both consider high-integrity will stop participating in the Effective Altruism movement and ecosystem within the next 5 years”, but that’s a fairly indirect measure.
I think future catastrophes will also not look the same as past catastrophes because a lot of the underlying ecosystem has changed (number of people, amount of money, growth of AI, etc). That’s another reason why it’s hard to predict things.
I think there are underlying generators that give a lot of power and respect to people without vetting them or caring about obvious ways in which they are low-integrity, un-principled, and unethical. … I think most people involved will not have the opportunity to have their low ethical standards be displayed so undeniably...
Would you like to say more about this? I’m curious if there are examples you can talk about publicly.
I feel like this conversation is getting somewhat heated btw; if you like I’d be happy to have a phone or video call or just plan to meet up in person some day if you like. This is not an ask, just an offer just in case you’d like that.
Thanks. I don’t feel much in the way of anger toward you personally, I’m primarily angry about the specific analysis of the situation that you are using here (and which I expect many others share). I still like you personally and respect a bunch of your writing on AI takeoff (and more). I don’t currently feel like asking you to talk about this offline. (I’d be open to dialoguing more about it if you wanted that because I like discussing things in dialogue in general, but I’m not asking for that.)
I guess I should know better by now, but it still astonishes me that EAs can set such abysmally low standards for themselves while simultaneously representing themselves as dramatically more ethical than everyone else.
...compared to what? Seriously, what groups of people are you comparing to? Among the people in my extended network who see themselves as altruists, EAs seem to hold themselves and each other to the highest standards, and also seem to actually be more ethical than the rest. My extended network consists of tech company workers, academics, social justice types, and EAs. (Well, and rationalists too, but I’m not counting them.)
I agree this is a low bar in some absolute sense—and there are definitely social movements in the world today (especially religious ones) that are better in both dimensions. There’s a lot of room for improvement. And I very much support these criticisms and attempts at reform. But I’m just calling it like I see it here; it would be dishonest grandstanding of me to say the sentence Zvi wrote in the OP, at least without giving additional context.
I think most organizations the size of EA have formal accountability mechanisms that attempt to investigate claims of fraud and abuse in some kind of more-or-less fair and transparent way. Of course, the actual fairness and effectiveness of such mechanisms can be debated, but at least the need for them is acknowledged. The attitude among EAs, on the other hand, seems to be that EAs are all too smart and good to have any real need for accountability.
IMO the EA community has had a reckoning, a post-mortem, an update, etc. far more than most social or political movements would (and do) in response to similar misbehavior from a prominent member
As a reference point: fraud seems fairly common in Y Combinator-backed companies, but I can’t find any sort of postmortem, even about major things like uBiome, where the founders are literally fugitives from the FBI.
It seems like you could tell a fairly compelling story that YC pushing founders to pursue risky strategies and flout rules is upstream of this level of fraudulent behavior, though I haven’t investigated closely.
My guess is that they just kind of accept that their advice to founders is just going to backfire 1-2% of the time.
I would be ecstatic to learn that only 2% of Y-Combinator companies that ever hit $100mm were engaged in serious fraud, and presume the true number is far higher.
And yes, YC does do that and Matt Levine frequently talks about the optimal amount of fraud (from the perspective of a VC) being not zero. For them, this is a feature, not a bug, up to a (very high) point.
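(A toy expected-value sketch of that Levine-style logic, with entirely made-up numbers: if pushing founders to be more aggressive raises both the value of the legitimate winners and the rate of fraud, the portfolio optimum can land at a nonzero fraud rate.)

```python
# Toy model with invented numbers: a VC portfolio where a more aggressive
# founder culture raises both the average winner's value and the fraud rate.
def portfolio_value(a: float) -> float:
    """Expected value of a 100-company portfolio at aggressiveness a in [0, 1]."""
    fraud_rate = 0.005 + 0.03 * a**2   # more aggression, more fraud (0.5%-3.5%)
    win_value = 200e6 + 400e6 * a      # ...but bigger legitimate wins
    fraud_cost = 12e9                  # legal/reputational cost per fraud case
    return 100 * ((1 - fraud_rate) * win_value - fraud_rate * fraud_cost)

best = max((a / 100 for a in range(101)), key=portfolio_value)
print(f"Toy optimum: aggressiveness={best:.2f}, "
      f"fraud rate={0.005 + 0.03 * best**2:.1%}")  # optimum is not at zero fraud
```

(On these made-up numbers the optimum sits around a 1-2% fraud rate, consistent with the guess upthread.)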
I would hope we would feel differently, and also EA/rationality has had (checks notes) zero companies/people bigger than FTX/SBF unless you count any of Anthropic, OpenAI and DeepMind. In which case, well, other issues, and perhaps other types of fraud.

Oh yeah, just because it’s a reference point doesn’t mean that we should copy them.
The total net-fraud from YC companies seems substantially smaller than the total net-fraud from EA efforts, and I think a lot more people have been involved with YC than with EA, so I don’t really think this comparison goes through.
Like, EA has defrauded much more money than we’ve ever donated or built in terms of successful companies. Total non-fraudulent valuations of YC companies are in the hundreds of billions, whereas total fraud is maybe in the $1B range? That seems like a much more acceptable ratio of fraud to value produced.
FTX is missing $1.8B. OpenPhil has donated $2.8B.

Also, I don’t think it makes sense to characterize FTX’s theft of customer funds as “EA defrauding people”. SBF spent around $100 million on charitable causes and billions on VC investments, celebrity promotions, interest payments to crypto lenders, Bahamas real estate, and a bunch of other random crap. And Alameda lost a bunch more buying shitcoins that crashed.
To say that EA defrauded people because FTX lost money is to say that of the 8 billion or whatever Alameda was short, the $100 million spent on EA priorities is somehow responsible for the other 7.9 billion. It just doesn’t make any sense.
I think it makes sense to say “EAs defrauded people”. Sam was clearly an EA, and he mostly defrauded people in pursuit of an EA mission, which he thought was best optimized by increasing the valuation of FTX.
Virtually no one in EA would have approved of the manner by which Sam sought to make FTX more valuable. So I guess I don’t really see it as a failure of the EA movement or its morals. If someone is part of a movement and does something that the movement is explicitly against, is it the movement’s fault?
I also don’t think people put their money in FTX because they wanted to help EA. They mostly put money in FTX because they believed it was a reputable exchange (whether that was because it was endorsed by Tom Brady or Steph Curry or any number of other people) and because they wanted to make money on crypto.
Virtually no one in EA would have approved of the manner by which Sam sought to make FTX more valuable.
I talked to many people about Sam doing shady things before FTX collapsed. Many people definitely endorsed those things. I don’t think they endorsed stealing customer deposits, though honestly, my guess is a good chunk of people would have endorsed it if that wouldn’t have resulted in everything exploding (and if it was just like a temporary dip into customer deposits).
I don’t understand the second paragraph. Yes, Sam tricked people into depositing money onto his exchange, which he then used to fund a bunch of schemes, mostly motivated via EA and with the leadership team being substantially populated by EA people. Of course the customers didn’t want to help EA, that’s what made it a fraud. My guess is I am misunderstanding something you are trying to communicate.
A simpler way to phrase my question is “If you steal 8 billion and spend 7.9 billion on non-EA things, did you really do it for EA?”

Well, it’s more “you steal 8 billion dollars and gamble them on all-or-nothing bets where if you win you are planning to spend them on EA”. I think that totally counts as EA.
Like, Sam spent that money in the hopes of growing FTX, and he was building FTX for earning-to-give reasons.
That is an interesting number; however, I think it’s a bit unclear how to think about defrauding here. If you steal $1000, and then I sue you and get that money back, it’s not like you “stole zero dollars”.
I agree it matters how much is recoverable, but most of the damage from FTX is not about the lost deposits specifically anyway, and I think the correct order of magnitude of the real costs here is probably greater than the money that was defrauded, though I think reasonable people can disagree on the number here. Similarly, I think when you steal a $1000 bike from me, even if I get it back, the economic damage that you introduced is probably roughly on the order of the cost of the bike.
I also don’t believe the $1.8B number. I’ve been following the reports around this very closely and every few weeks some news article claims vastly different fractions of funds have been recovered. While not a perfect estimator, I’ve been using the price at which FTX bankruptcy claims are trading, which I think is currently around 60%, suggesting more like $4B missing (claims against Alameda Research are trading at 15%, driving that number down further, but I don’t know what fraction of the liabilities were Alameda claims).
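(The arithmetic behind that estimate, as a sketch. The total-claims figure is a placeholder assumption for illustration, not a number from this thread.)

```python
# Back-of-the-envelope: if claims trade at p cents on the dollar, the market
# expects recovery of p per dollar owed, so the shortfall is (1 - p) * claims.
ftx_claim_price = 0.60   # FTX claims trading at ~60% of face value
total_claims = 10e9      # hypothetical placeholder: ~$10B of allowed claims

implied_missing = (1 - ftx_claim_price) * total_claims
print(f"Implied shortfall: ${implied_missing / 1e9:.1f}B")  # -> $4.0B
# Alameda claims trading at ~15% would push a blended estimate higher still.
```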
Yep that’s fair, there is some subjectivity here. I was hoping that the charges from SDNY would have a specific amount that Sam was alleged to have defrauded, but they don’t seem to.
Regarding the $4B missing: adding in Anthropic gets another $4B on the EA side of the ledger, and Founders Pledge another $1B. The value produced by Anthropic is questionable, and maybe negative of course, but I think by the strict definition of “donated or built in terms of successful companies” EA comes out ahead.
(And OpenAI gets another $80B, so if you count that then I think even the most aggressive definition of how much FTX defrauded is smaller. But obviously OAI’s EA credentials are dubious.)
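(Tallying the ledger comparison being made here, using the round numbers from this thread rather than audited figures.)

```python
# Round numbers as used in this thread (not audited figures).
fraud_side = 4e9  # upper-end estimate of the FTX shortfall discussed above
ea_side = {
    "Anthropic (EA-side value per this thread)": 4e9,
    "Founders Pledge": 1e9,
    "OpenPhil donations": 2.8e9,
}

ea_total = sum(ea_side.values())
print(f"EA side: ${ea_total / 1e9:.1f}B vs. fraud: ${fraud_side / 1e9:.1f}B")
# On these (contestable) numbers, the "built or donated" side comes out ahead,
# even before counting OpenAI's $80B, whose EA credentials the thread disputes.
```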
Regarding the $4B missing: adding in Anthropic gets another $4B on the EA side of the ledger, and Founders Pledge another $1B.
Well, I mean, I think making money off of building doomsday machines goes on the cost side of the ledger, but I do think it applies to the specific point I made above and I think that’s fair. Anthropic is quite successful at a scale that is not that incomparable to the size of the FTX fraud.
We have Wildeford’s Third Law: “Most >10 year forecasts are technically also AI forecasts”.
We need a law like “Most statements about the value of EA are technically also AI forecasts”.