I don’t agree with the characterization of this topic as self-obsessed community gossip. For context, I’m quite new and don’t have a dog in the fight. But I drew memorable conclusions from this that I couldn’t have gotten from more traditional posts
First, experimenting with our own psychology is tempting and really dangerous. Next time, I’d turn up the caution dial way higher than Leverage did
Second, a lot of us (probably including me) have an exploitable weakness brought on by high scrupulosity combined with openness to crazy-sounding ideas. Next time, I’d be more cautious (but not too cautious!) about proposals like joining Leverage
Third, if we ever want to maintain the public’s goodwill, I’ll try not to use phrases like “demonic seance”… even if I don’t mean them literally
In short, this is the sort of mistake worth learning about, including for those not personally affected, because it’s the kind of mistake we could plausibly make again. I think it’s useful to have here, and the right attitude for the investigation is “what do these events teach us about how rationalist groups can go wrong?” I also don’t think posting a summary would’ve been sufficient. It was necessary to hear Geoff and Anna’s exact words
In fact, what I’d really like to see from this is Leverage and CFAR’s actual research, including negative results
What experiments did they try? Is there anything true and surprising that came out of this? What dead ends did they discover (plus the evidence that these are truly dead ends)?
It’d be especially interesting if someone annotated Geoff’s giant agenda flowchart with what they were thinking at the time and what, if anything, they actually tried
Also interested in the root causes of the harms that came to Zoe et al. Is this an inevitable consequence of Leverage’s beliefs? Or do the particular beliefs not really matter, and it’s really about the social dynamics in their group house?
Probably not what you wanted, but you can read CFAR’s handbook and updates (where they also reflect on some screwups). I am not aware of Leverage having anything equivalent publicly available.