Examples of EA errors as failures of wholesomeness
In this comment (cross-posted from the EA forum) I’ll share a few examples of what I mean by failures of wholesomeness. I don’t really mean to over-index on these examples. I actually feel like a decent majority of what I wish EA had been doing differently relates to this wholesomeness stuff. However, I’m choosing examples that are particularly easy to talk about — around FTX and around mistakes I’ve made — because I have good visibility into them, and in order not to put other people on the spot. Although I’m using these examples to illustrate my points, my beliefs don’t hinge too much on the particulars of these cases. (But the fact that the “failures of wholesomeness” frame can provide insight into a variety of different types of error does increase the degree to which I think there’s a deep and helpful insight here.)
Fraud at FTX
To the extent that the key people at FTX were motivated by EA reasons, it looks like a catastrophic failure of wholesomeness — most likely driven by a strong desire for expedience, and by a distorted picture in which people’s gut sense of what was good was dominated by the terms for which they had explicit models of impact on EA-relevant areas. It is uncomfortable to think that people could have caused this harm while believing they were doing good, but I find it has some plausibility. It is hard to imagine that they would have made the same mistakes if they had explicitly held “be wholesome” as a major desideratum in their decision-making.
EA relationship to FTX
Assume that we don’t get to intervene to change SBF’s behaviour. I still think that EA would have had a healthier relationship with FTX if it had held wholesomeness as a core virtue. I think many people had some feeling of unwholesomeness associated with FTX, even if they couldn’t point to all of the issues. I think focusing on this might have helped EA to keep FTX at more of a distance, not to extol SBF so much just for doing a great job at making a core metric ($ to be donated) go up, etc. It could have gone a long way towards reducing inappropriate trust if people felt that their degree of trust in other individuals or organizations should vary not just with whether someone espouses EA principles, but with how wholesomely they act in general.
My relationship to attraction
I had an unhealthy relationship to attraction, and took actions which caused harm. (I might now say that I related to my attraction as unwholesome — arguably a mistake in itself, but compounded because I treated that unwholesomeness as toxic and refused to think about it. This blinded me to a lot of what was going on for other people, which led to unwholesome actions.)
Though I now think my actions were wrong, at some level I felt at the time like I was acting rightly. But (although I never explicitly thought in these terms) I do not think I would have felt that I was acting wholesomely. So if wholesomeness had been closer to a core part of my identity, I might have avoided the harms — even without getting to magically intervene to fix my mistaken beliefs.
(Of course this isn’t precisely an EA error, as I wasn’t regarding these actions as in pursuit of EA — but it’s still very much an error where I’m interested in how I could have avoided it via a different high-level orientation.)
Wytham Abbey
Although I still think that the Wytham Abbey project was wholesome in its essence, in retrospect I think that I was prioritizing expedience over wholesomeness in choosing to move forward quickly and within the EV umbrella. I think that the more wholesome thing to do would have been to establish, up front, a new charity with appropriate governance structures. This would have been more inconvenient and would have slowed things down — but everything would have been more solid, and its correctness more auditable. Given the scale of the project and its potential to attract public scrutiny, having a distinct brand completely separate from “the Centre for Effective Altruism” would have been a real benefit.
I knew at the time that this wasn’t entirely the wholesome way to proceed. I can remember feeling “you know, it would be good to sort out governance properly — but this isn’t urgent, so maybe let’s move on and revisit this later”. Of course there were real tradeoffs there, and I’m less certain than for the other points that there was a real error here; but I think I was a bit too far in the direction of wanting expedience, and of expecting that we’d be able to iron out small unwholesomenesses later. Leaning further towards caring about wholesomeness might have led to more correct actions.