After discussing the matter with some other (non-Leverage) EAs, we’ve decided to wire $15,000 to Zoe Curzi (within 35 days).
A number of ex-Leveragers seem to be worried about suffering (financial, reputational, etc.) harm if they come forward with information that makes Leverage look bad (and some also seem worried about suffering harm if they come forward with information that makes Leverage look good). This gift to Zoe is an attempt to signal support for people who come forward with accounts like hers, so that people in Zoe’s reference class are more inclined to come forward.
We’ve temporarily set aside $85,000 in case others write up similar accounts—in particular, accounts where it would be similarly useful to offset the incentives against speaking up. We plan to use our judgment to assess reports on a case-by-case basis, rather than having an official set of criteria. (It’s hard to design formal criteria that aren’t gameable, and we were a bit wary of potentially setting up an incentive for people to try to make up false bad narratives about organizations, etc.)
Note that my goal isn’t to evaluate harms caused by Leverage and try to offset them. Instead, it’s to offset any incentives against sharing risky, honest accounts like Zoe’s.
Full disclosure: I worked with a number of people from Leverage between 2015 and 2018. I have a pretty complicated but overall relatively negative view of Leverage (as shown in my comments), though my goal here is to make it less costly for people around Leverage to share important evidence, not to otherwise weigh in on the object-level inquiry into what happened. Also, this comment was co-authored with some EAs who helped get the ball rolling on this, so it probably isn’t phrased the way I would have phrased it myself.
Note that my goal isn’t to evaluate harms caused by Leverage and try to offset them. Instead, it’s to offset any incentives against sharing risky, honest accounts like Zoe’s.
I like the careful disambiguation here.
FWIW, I independently proposed something similar to a friend in the Lightcone office last week, with an intention related to offsetting harm. My reasoning:
There’s often a problem in difficult “justice” situations, where people have only a single bucket for “make the sufferer feel better” and “address the wrong that was done.”
This is quite bad—it often causes people to either do too little for victims or too much to offenders, because they’re trying to achieve two goals at once and one goal dominates the calculation. For example: not helping someone materially because the harm proved unintentional, or punishing the active party far in excess of what they “deserve” because that’s what it takes to make the injured party feel better.
Separating it out into “we’re still figuring out the Leverage situation but in the meantime, let’s try to make this person’s life a little better” is excellent.
To reiterate: I understand that’s not what you are doing here. But I think that would separately also have been a good thing.
A few quick thoughts:
1) This seems great, and I’m impressed by the agency and speed.
2) From reading the comments, it seems like several people were actively afraid of how Leverage could retaliate. I imagine the same applies to accusations/whistleblowing about other organizations. I think this is both very, very bad and unnecessary; as a whole, the community is much more powerful than any individual group, so it seems poorly managed for the community to be scared of a specific group. Resources should be spent to cancel this out.
In light of this, if more money were available, it seems easy to justify a fair bit more. Even better could be something like, “We’ll help fund lawyers if you’re attacked legally, or anti-harassment teams if you’re harassed or trolled.” This is similar to how the EFF helps individuals and small groups who are being attacked by big companies.
I don’t mean to complain; I think any steps here, especially ones taken so quickly, are fantastic.
3) I’m afraid this will get lost in this comment section. I’d be excited for a list of “things to keep in mind” like this to be made prominent somewhere, repeatedly. For example, I could imagine that at community events or similar, there could be handouts like “Know your rights, as a Rationalist/EA”, which flag how individuals can report bad actors and bad behavior.
4) Obviously a cash prize can encourage lying, but I think this can be decently managed. (It’s a small community, so if there’s good moderation, $15K would be very little compared to the social stigma that would come from being found out to have destructively lied for $15K.)