What is this f***ing post-mortem? What was the root-cause analysis? Where is the list of changes that have been made to prevent an impulsive and immoral man like Sam taking tons of resources, talent and prestige from the Effective Altruism ecosystem and performing crimes of a magnitude for which a typical human lifetime is not long enough to make right? Was it due to the rapid growth beyond the movement’s ability to vet people? Was it due to people in leadership being afraid to investigate accusations of misbehavior? What was the cause here that has been fixed?
Please do not claim that things have been fixed without saying concretely what you believe has been fixed. I have seen far too many people continue roughly business as usual. It sickens me.
Do you agree with my comparative claim? EA vs. Democrats or Republicans, for example, or EA vs. Social Justice, or EA vs. idk pick some other analogous movement.
I could make a bigass list of EA forum and LW posts arguing about how to interpret what happened and lashing out with various bits of blame here and there. Pretty much all of the lessons/criticisms Zvi makes in this post have been made multiple times before. Including by e.g. Habryka, whom I respect greatly and admire for doing so. But I don’t feel motivated to make this list and link it here because I’m pretty sure you’ve read it all too; our disagreement is not about the list but whether the list is enough.
Notice, also, that I didn’t actually say “The problem is fixed.” I instead expressed doubt in the “not anything like enough” claim. I mused that it would be good to make some forecastable predictions. This was because I myself am unsure about what to think here. I have appreciated the discussions and attempted lesson-taking from the SBF disaster and I’m glad it’s happening & I support doing more of it.
[I feel like this conversation is getting somewhat heated btw; if you like, I’d be happy to have a phone or video call, or to meet up in person some day. This is not an ask, just an offer in case you’d like that.]
I think your description of what happened with SBF is accurate, and I think that it is significantly less likely to happen again given how the SBF thing blew up and how people have reacted to it. (I take it we disagree about this.)
That said, I’m maybe not as plugged in to the EA community as I should be? idk. I’d be curious to hear what your concerns are—e.g. are there people who seem to you to be impulsive and immoral and on a path to gain in prestige and influence? Fair enough if you don’t want to say who they are (though I’d love that) but I’d be curious to hear whether you have specific people in mind or are just saying abstractly that we are still vulnerable to this failure mode since we haven’t fixed the root causes.
I think if I were to answer that question, I’d probably say something like “I know of one or two very sketchy people but they don’t seem to have much influence. Then there is Anthropic, which seems to me to be a potential SBF-risk: lots of idealistic people, very mission-driven, lots of trust in leadership, making very important decisions about the fate of humanity. Could go very badly if leadership is bad in the ways SBF was bad. That said I don’t expect that to be the case & would be curious to get concrete and make forecasts and hear evidence. I currently think Anthropic is net-positive in expectation, which is saying a lot since it’s an AGI company and I think there’s something like a 70% chance of unaligned AGI takeover by the end of this decade.”
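I don’t feel confident about any of this. I think the following can be and are both true at once:
What happened was not anything like enough.
What happened was more than one would expect from a political party, or a social movement such as social justice.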
I certainly agree this is possible. Insofar as you think that’s not only possible but actual, then thanks, that’s a helpful clarification of your position. Had you said something like this above I probably wouldn’t have objected, at least not as strongly, and instead would have just asked for predictions.
Do you agree with my comparative claim? EA vs. Democrats or Republicans, for example, or EA vs. Social Justice, or EA vs. idk pick some other analogous movement.
What I expect for a movement of this scale or larger, where a prominent figure has a scandal of this level, is that many people wring their hands over it, some minor changes are made, lots of defensive PR actions are taken, but nobody is in a position to really fix the underlying problems and fixing them isn’t really tried. Some status is substantively reallocated and trust is lowered, and then it continues on regardless. I currently cannot distinguish the Effective Altruism ecosystem from this standard story. Beyond FTX, who has been fired? Who has stepped forward and taken responsibility? Who has admitted to major fault?
I suspect the main thing that has gone better in the EA ecosystem is that it has engaged in less actively criminal or unethical behavior in the cover-up and in the PR defense, while not actually fixing anything. That is a low bar, and this is still best described as “a failure”.
I also think any of those movements/ecosystems would have a ton of energy for discussion and finger-pointing and attempts to use this issue to change people’s status. Perhaps you are misreading “lots of bickering” as “there has been a reckoning”. The EA Forum is filled with squabbling of this sort, which is a substantial reason why I do not read it.
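That’s helpful, thanks. Can you give an example of a movement of this scale or larger that had a scandal of this level, and reacted better than EA did?
IIRC Will MacAskill admitted to major fault, though I don’t remember what he said and wasn’t paying close attention. Here’s the statement I remembered: A personal statement on FTX — EA Forum (effectivealtruism.org)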
I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception.
...
If FTX misused customer funds, then I personally will have much to reflect on. Sam and FTX had a lot of goodwill – and some of that goodwill was the result of association with ideas I have spent my career promoting. If that goodwill laundered fraud, I am ashamed.
As a community, too, we will need to reflect on what has happened, and how we could reduce the chance of anything like this from happening again. Yes, we want to make the world better, and yes, we should be ambitious in the pursuit of that.
But that in no way justifies fraud. If you think that you’re the exception, you’re duping yourself.
We must make clear that we do not see ourselves as above common-sense ethical norms, and must engage criticism with humility.
I know that others from inside and outside of the community have worried about the misuse of EA ideas in ways that could cause harm. I used to think these worries, though worth taking seriously, seemed speculative and unlikely.
I was probably wrong. I will be reflecting on this in the days and months to come, and thinking through what should change.
I read this as an admission of guilt and responsibility. What do you wish he had said?
I read this as an admission of guilt and responsibility. What do you wish he had said?
Does it matter what he said? What has he done? As far as I’m aware he is mostly getting on with being a prominent figurehead of EA and a public intellectual.
Also this is hardly an admission of guilt. It primarily says “This seems bad and I will reflect on it.” He didn’t say:
“This theft of many thousands of people’s life savings will forever be part of the legacy of Effective Altruism, and I must ensure that this movement is not responsible for something even worse in the future. I take responsibility for endorsing and supporting this awful person and for playing a key role in building an ecosystem in which he thrived. I have failed in my leadership position and I will work to make sure this great injustice cannot happen again and that the causes are rectified, and if I cannot accomplish that with confidence within 12 months then I will no longer publicly support the Effective Altruism movement.”
I read this as an admission of guilt and responsibility. What do you wish he had said?
I think it’s a decent opening and it clearly calls for reflection, but you might notice that indeed no further reflection has been published, and Will has not published anything that talks much about what lessons he has taken away from these events.
To be clear, as I understand the situation Will did indeed write up a bunch of reflections, but then the EV board asked him not to publish them because that posed too much legal and PR risk. I agree this is some evidence of Will showing some remorse, but also evidence that the overall leadership does not care very much about people learning from what happened (at least not when weighed against increased PR and legal risk).
I think this is a potentially large cost of the fiscal sponsorship umbrella. Will can’t take on the risk personally or even for just his org, it’s automatically shared with a ton of other orgs.
That seems quite plausible. If that is his reasoning, then I think he should say that.
“I had planned to write in more detail about my relationship to Sam and FTX, what actions I took, and in what ways I think my actions did and did not enable these crimes to take place; but due to concerns about risking the jobs of 100+ people I have chosen to not share information about this for the following 1-4 years (that is, until any legal and financial investigation of Effective Ventures, an org that I’m on the board of and that facilitated a lot of financial grantmaking for FTX, has concluded).
This obviously largely prohibits the Effective Altruism ecosystem from carrying out a collective fact-finding effort around those who were closely involved with Sam and FTX within the next 1-4 years, and substantially obstructs a clear fault analysis and post-mortem from occurring, and I expect that as a result many readers should correctly update that, by default, the causes of these problems will not be fixed.
I hope that this is not the death of the Effective Altruism ecosystem that I have worked to build over the last 10+ years, but I am not sure how people working and living in this ecosystem can come to trust that crimes of a similar magnitude will not happen again after seeing little-to-no accounting of how this criminal was funded and supported, nor any clear fixes implemented in the ecosystem to prevent such crimes from occurring in the future, and I sadly expect many good people will rightly leave the ecosystem because of it.”
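Pretty big if true. If EV is actively censoring attempts to reflect upon what happened, then that is important information to pin down.
I would hope that if someone tried to do that to me, I would resign.
That’s what I told Will to do. He felt like that would be uncollaborative with broader EA leadership.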
I wish he had said (perhaps after some time to ponder) “I now realize that SBF used FTX to steal customer funds. SBF and FTX had a lot of goodwill that I contributed to, and I let those people and the entire community down.
As a community, we need to recognize that this happened in part because of us. And I recognize that this happened partly because of me, in particular. Yes, we want to make the world better, and yes, we should be ambitious in the pursuit of that. But we have been doing so in a way that, we can now see, can set people on extremely dark and destructive paths.
No promise to do good justifies fraud, or the encouragement of fraud. We have to find a philosophy that does not drive people towards fraud.
We must not see or treat ourselves as above common-sense ethical norms, and must engage criticism with humility. We must fundamentally rethink how to embody utilitarianism where it is useful, within such a framework, recognizing that saying ‘but don’t lie or do fraud’ at the end often does not work.
I know others have worried that our formulation of EA ideas could lead people to do harm. I used to think this was unlikely. I now realize it was not, and that this was part of a predictable pattern that we must end, so that we can be a force for good once more.
I was wrong. I will continue to reflect in the coming months.”
And then, ya know, reflect, and do some things.
I interpret the statement he actually made as a plea for time to process while affirming the bare minimum. Where was his follow-up?
Your proposal seems to me to be pretty similar to what he actually said, just a bit stronger here and there. Ben’s proposal below, by contrast, is much stiffer stuff, mostly because of the last sentence.
Can you give an example of a movement of this scale or larger that had a scandal of this level, and reacted better than EA did?
None come to mind. (To be clear, this doesn’t seem cruxy for whether Effective Altruism has succeeded at reforming itself.)
I think instructive examples to look into would be things like:
How the justice system itself investigates crimes. I really like reading published reports where an investigator has been given a lot of resources to figure something out and then writes up what they learned. In many countries it is illegal to lie to an investigator when they are investigating a crime, which means that someone can go around and just ask what happened, then share that and prosecute any unlawful behavior.
How countries deal with their own major human rights violations. I am somewhat interested in understanding things like how the Truth and Reconciliation process went in South Africa, and also how Germany has responded post WWII, where I think both really tried to reform to ensure that the same thing couldn’t happen again.
How companies investigate disasters. Sometimes a massive company will have a disaster or screw-up (e.g. the BP oil spill, the Boeing crashes, the Johnson & Johnson Tylenol poisoning incident) and will conduct a serious investigation and try to fix the problem. I’d be interested in reading successful accounts there and how they went about finding the source of the problem and fixing it.
Religious reformations. The Protestant split was in response to a bunch of theological and pragmatic disagreements and also concerns about corruption (the clergy leading lavish lives). I’d prefer to not have a split and instead have a reform; I suspect there are other instances of major religious reform that went well that one can learn lessons from (and of course also many to avoid).
Can you give an example of a movement of this scale or larger that had a scandal of this level, and reacted better than EA did?
I think almost any large organization/company would have gone through a much more comprehensive fault-analysis and would have made many measurable improvements. For examples of this you could read through the history of Apple, or Tesla, or TSMC, or Intel. You could also look into the reforms that happened to lots of investment banks post 2008.
Companies are different than social movements, though my sense is that in the history of religion there have also been many successful reform efforts in response to various crises, which seems more similar.
As another interesting example, it also seems to me that Germany pretty successfully reformed its government and culture post World War 2.
I think Germany is an extreme outlier here fwiw; e.g., Japan did far worse things and after WW2 cared more about covering up wrongdoing than about admitting fault; further, Germany’s government and cultural “reformation” was very much strongarmed by the US and other allies, whereas the US actively assisted Japan in covering up war crimes.
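EDIT: See shortform elaboration: https://www.lesswrong.com/posts/s58hDHX2GkFDbpGKD/linch-s-shortform?commentId=ywf8R3CobzdkbTx3d
Here are some notes on why I think Imperial Japan was unusually bad, even by the very low bar set by the Second World War.
Curious why you say “far worse” rather than “similarly bad”, though this isn’t important to the main conversation.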
I started writing a comment reply to elaborate after getting some disagreevotes on the parent comment, but decided that it’d be a distraction from the main conversation; I might expand on my position in an LW shortform at some point in the near future.
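Update: OK, now I agree. I encourage you to make a post on it.
I claim YCombinator is a counterexample.
(The existence of one counterexample obviously doesn’t contradict the “almost any” claim.)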
I think almost any large organization/company would have gone through a much more comprehensive fault-analysis and would have made many measurable improvements.
Notice, also, that I didn’t actually say “The problem is fixed.” I instead expressed doubt in the “not anything like enough” claim. I mused that it would be good to make some forecastable predictions. This was because I myself am unsure about what to think here. I have appreciated the discussions and attempted lesson-taking from the SBF disaster and I’m glad it’s happening & I support doing more of it.
I feel like you’re implicitly claiming that something has really changed! I am finding it hard to think of a world where less would have changed after a scandal this big.
I think your description of what happened with SBF is accurate, and I think that it is significantly less likely to happen again given how the SBF thing blew up and how people have reacted to it.
It is commonly the case that the exact same failure will not repeat itself. I think that broader civilization does not have the skill of preventing the same thing from happening again (e.g. if a second covid came along I do not much expect that civilization would do more than 2x better the second time around, whereas I think one could obviously do 10x-30x better), and so the Effective Altruism movement is doing less dysfunctionally than broader civilization on this measure, in that there will probably not be another $8B crypto fraud. I think this is primarily just because many people have rightly lost trust in the Effective Altruism ecosystem and will not support it as much, but not because the underlying generators that were untrustworthy have been fixed.
I mused that it would be good to make some forecastable predictions.
I don’t know how to operationalize things here! I think there are underlying generators that give a lot of power and respect to people without vetting them or caring about obvious ways in which they are low-integrity, unprincipled, and unethical. Most Effective Altruism orgs are in the non-profit sector. I think most people involved will not have the opportunity to have their low ethical standards be displayed so undeniably as someone involved in a crypto scam, and I do not expect there is going to be consensus about other scandals in the way there is about this one. So I don’t really know what to forecast, other than “a list of people you and I both consider high-integrity will stop participating in the Effective Altruism movement and ecosystem within the next 5 years”, but that’s a fairly indirect measure.
I think future catastrophes will also not look the same as past catastrophes because a lot of the underlying ecosystem has changed (number of people, amount of money, growth of AI, etc). That’s another reason why it’s hard to predict things.
I think there are underlying generators that give a lot of power and respect to people without vetting them or caring about obvious ways in which they are low-integrity, unprincipled, and unethical. … I think most people involved will not have the opportunity to have their low ethical standards be displayed so undeniably...
Would you like to say more about this? I’m curious if there are examples you can talk about publicly.
I feel like this conversation is getting somewhat heated btw; if you like, I’d be happy to have a phone or video call, or to meet up in person some day. This is not an ask, just an offer in case you’d like that.
Thanks. I don’t feel much in the way of anger toward you personally, I’m primarily angry about the specific analysis of the situation that you are using here (and which I expect many others share). I still like you personally and respect a bunch of your writing on AI takeoff (and more). I don’t currently feel like asking you to talk about this offline. (I’d be open to dialoguing more about it if you wanted that because I like discussing things in dialogue in general, but I’m not asking for that.)
I guess I should know better by now, but it still astonishes me that EAs can set such abysmally low standards for themselves while simultaneously representing themselves as dramatically more ethical than everyone else.
...compared to what? Seriously, what groups of people are you comparing to? Among the people in my extended network who see themselves as altruists, EAs seem to hold themselves and each other to the highest standards, and also seem to actually be more ethical than the rest. My extended network consists of tech company workers, academics, social justice types, and EAs. (Well, and rationalists too, but I’m not counting them.)
I agree this is a low bar in some absolute sense—and there are definitely social movements in the world today (especially religious ones) that are better in both dimensions. There’s a lot of room for improvement. And I very much support these criticisms and attempts at reform. But I’m just calling it like I see it here; it would be dishonest grandstanding of me to say the sentence Zvi wrote in the OP, at least without giving additional context.
I think most organizations the size of EA have formal accountability mechanisms that attempt to investigate claims of fraud and abuse in some kind of more-or-less fair and transparent way. Of course, the actual fairness and effectiveness of such mechanisms can be debated, but at least the need for them is acknowledged. The attitude among EAs, on the other hand, seems to be that EAs are all too smart and good to have any real need for accountability.