Edit: (One person reading this reports below that this made them more reluctant to come forward with their story, and so that seems bad to me. I have mentally updated as a result. More relevant discussion below.)
I notice that there’s not that much information public about what Geoff actually Did and Did Not Do. Or what he instigated and what he did not. Or what he intended or what he did not intend.
Um, I would like more direct evidence of what he actually did and did not do. This is cruxy for me in terms of what should happen next.
Right now, based just on the Medium post, one plausible take is that the people in Geoff’s immediate circle may have been taking advantage of their relative power in the hierarchy to abuse the people under them.
See this example from Zoe:
A few weeks after this big success, this person told me my funding was in question — they had done all they could do to train me and thought I might be too blocked to sufficiently progress into a Master on the project. They and Geoff were questioning my commitment to and understanding of the project, and they had concerns about my debugging trajectory.
“They and Geoff” makes it sound like Zoe’s supervisor basically name-dropped Geoff as a way to add weight to a scare tactic. Like “better watch out cuz the boss thinks you’re not committed enough...” But it’s not really clear what the boss actually said or did not say… This supervisor might just be using a move. (I welcome additional clarity.)
The most directly ‘damning’ thing, as far as I can tell, is Geoff pressuring people to sign NDAs.
A lot of the other stuff seems like it’s due to the people around Geoff elevating him to an unreasonable pedestal and treating him like a savior. Maybe Geoff should have done more to stop this from escalating / done more to make people chill out about him and his supposed specialness. But him failing to control his flock is a different failure from him feeding them lies or requiring worship. I’m not seeing any statements about this. I welcome more information and clarity.
I am wanting clarity here because I am very aware of people’s strong desire for a [cult] leader. It can be pretty severe. And this is very much a co-participation between leaders and followers.
I know what it’s like from the inside to want someone to be my cult leader, god, or parent figure. And I have low tolerance for narratives that try to take my personal agency away from me—that claim I was a victim of mind control or whatever, rather than someone who bottom-level gave up my power to them.
Even if I didn’t consciously give away my power and it just sort of happened, I think it’s still wrong to write a narrative where I merely blame the other person and absolve myself of all responsibility or agency. This sounds unhealthy to hold onto, as a story.
I’m def not trying to absolve Geoff (or anyone) of responsibility, accountability, or agency. But also ew scapegoating is gross?
My main desire is for more information, or for people to realize that we might not be meeting relevant cruxes for how to move forward, and that we should continue to investigate and hold off on taking heavy actions.
The most directly ‘damning’ thing, as far as I can tell, is Geoff pressuring people to sign NDAs.
I received an email from a Paradigm board member on behalf of Paradigm and Leverage that aims to provide some additional clarity on the information-sharing situation here. Since the email specifies that it can be shared, I’ve uploaded it to my Google Drive (with some names and email addresses redacted). You can view it here.
The email also links to the text of the information-sharing agreement in question, with some additional annotations.
[Disclosure: I work at Leverage, but did not work at Leverage during Leverage 1.0. I’m sharing this email in a personal rather than a professional capacity.]
I do applaud explicitly clarifying that people are free to share their own experiences.
Thanks for sharing this!
I believe this is public information if I look for your 990s, but could you or someone list the Board members of Leverage / Paradigm, including changes over time?
I don’t know how realistic this worry is, but I’m a bit worried about scenarios like:
A signatory doesn’t share important-to-share info because they interpret the Information Arrangement doc (even with the added comments) as too constraining.
My sense is that there’s still a lot of ambiguity about exactly how to interpret parts of the agreement? And although the doc says it “is meant to be based on norms of good behavior in society” I don’t see a clause explicitly allowing people’s personal consciences to supersede the agreement. (I might just have missed it.)
Or: A signatory doesn’t share important-to-share info because they see the original agreement as binding, not the new “clarifications and perspective today” comments.
(I don’t know how scrupulous ex-Leveragers are about sticking to signed informal agreements, but if the agreement has moral force, I could imagine some people going ‘the author can’t arbitrarily reinterpret the agreement post facto, when the agreement didn’t specify that you have this power’.
Indeed, signing a document with binding moral force seems pretty risky to me if the author has lots of leeway to later reinterpret what parts of the agreement mean. But maybe I’m misunderstanding the social context or ethical orientation of the Leveragers—I might be reading the agreement way more strictly than anyone construed it at the time.)
Is there any reason not to just say something like ‘to the extent we have the power to void this agreement, the whole agreement is now void’? People could then still listen to their consciences, and your recommendations, about what to do next; but I’d be less worried anyone feels constrained by having signed the thing. I don’t know the late-Leverage-1.0 people well, but I currently have more faith in y’alls moral judgment than in your moral-judgment-constrained-by-this-verbal-commitment.
The main reason I could imagine it being bad to say ‘this is void now’ is if there’s an ex-Leverager you think is super irresponsible, but who you convinced to sign the agreement—someone who you’d expect to make terrible reckless decisions if they weren’t bound by the thing.
But in that case I’d still think it makes sense to void the agreement for the people who are basically sensible and well-intentioned, which is hopefully just about everyone.
I don’t see a clause explicitly allowing people’s personal consciences to supersede the agreement. (I might just have missed it.)
It seems to me “this is not a legal agreement” is basically such a clause.
The main reason I could imagine it being bad to say ‘this is void now’ is if there’s an ex-Leverager you think is super irresponsible, but who you convinced to sign the agreement—someone who you’d expect to make terrible reckless decisions if they weren’t bound by the thing.
It seems that at the end of Leverage 1.0, groups were in conflict. There’s a strong interest in that conflict not playing out in a way where different people publish each other’s private information and then retaliate in kind.
It might very well be that plenty of ex-Leveragers don’t speak out because they are afraid that private information about them will be openly published in retaliation if they do.
Or: A signatory doesn’t share important-to-share info because they see the original agreement as binding, not the new “clarifications and perspective today” comments.
Given that there’s a section, “(10) Expected lessening”, it seems strange to me to see the original agreement as infinitely binding:
• Expect that the overall need for share restrictions will diminish, and that as a result we will wind down share restrictions over time, while still maintaining protection of sensitive information and people’s privacy
• If anyone concludes in the future that stronger information management is required, they should make efforts to educate others themselves, and should expect that that might be covered by some future arrangement, not this one
[...] The most important thing we want to clarify is that as far as we are concerned, at least, individuals should feel free to share their experiences or criticise Geoff or the organisations.
[… T]his document was never legally binding, was only signed by just over half of you, and almost none of you are current employees, so you are under no obligation to follow this document or the clarified interpretation here. [...]
I’m really happy to see this! Though I was momentarily confused by the “so” here—why would there be less moral obligation to uphold an agreement, just because the agreement isn’t legally binding, some other people involved didn’t sign it, and the signatory has switched jobs? Were those stipulated as things that would void the agreement?
My current interpretation is that Matt’s trying to say something more like ‘We never took this agreement super seriously and didn’t expect you to take it super seriously either, given the wording; we just wanted it as a temporary band-aid in the immediate aftermath of Leverage 1.0 dissolving, to avoid anyone taking hasty action while tensions were still high. Here’s a bunch of indirect signs that the agreement is no big deal and doesn’t have moral force years later in a very different context: (blah).’ It’s Bayesian evidence that the agreement is no big deal, not a deductive proof that the agreement is ~void. Is that right?
Another thing I want to mentally watch out for:
It might be tempting for some ex-Leverage people to use Geoff as the primary scapegoat rather than implicating themselves fully. So as more stories come out, I plan to be somewhat delicate with the evidence. The temptation to scapegoat a leader is pretty high and may even seem justifiable in an “ends justify the means” kind of thinking.
I don’t seem to personally be OK with using misleading information or lies to bolster a case against a person, even if this ends up “saving” a lot of people. (I don’t think it actually saves them… people should come to grips with their own errors, not hide behind a fallback person.)
So… Leverage, I’m looking at you as a whole community! You’re not helpless peons of Geoff Anders.
When spiritual gurus go out of control, it’s not a one-man operation; there are collaborators, enablers, people who hid information, yes-men and sycophants, those too afraid to do the right thing or speak out against wrongdoing, those too protective of personal benefits they may be receiving (status, friends, food, housing), etc.
There are stages of ‘coming to terms’ with something difficult. A very basic model would be:
1. Defensive / protective stage. I am still blended and identified with a problematic pattern or culture, so I defend it. It feels like my own being is at stake or on the line. It’s hard to see what’s true, and I am partially in denial or in dissociation—although I myself may not realize it.
2. Mitosis stage. I am in the process of a painful identity-level separation from the pattern or culture. I start feeling anger towards it, some grief, horror, etc. It’s likely I feel victimized. For the sake of gaining clarity, a victim narrative is more helpful than the previous narrative of “the thing is actually good though” or whatever fog of denial I was in.
3. Grief stage. Even more open and full realization of what happened and its problematic nature. Realizing my own personal part in it and the extent to which my actions were my own and also contributed to harm. This can be a very difficult stage, and may come with shame, guilt, remorse, and immense sadness.
4. Letting go and integration stage. Happy relief comes when all the disparate parts are integrated and all is forgiven. I hold myself to a new, higher standard, and I hold others also to a higher standard. I feel good about where I stand now, with more clarity and compassion. I see clearly what the mistakes were and how to avoid them. I can guide or warn others from making similar mistakes. There’s no emotional or trauma residue left in me. My capacity has expanded to hold more complexity and diversity. I am more accepting of the past, can see from many perspectives, and am ready to live fully present.
Stage 2 is a dangerous stage, and it is one I have been in, and where I was most volatile, angry, and likely to cause damage. Kind of wanting more common knowledge about this as a Thing so that we are collectively aware that damage is best minimized. Although I imagine disagreements with this.
I basically agree with this.
But also, I think pretty close to ZERO people who were deeply affected (aside from Zoe, who hasn’t engaged beyond the post) have come forward in this thread. And I… guess we should talk about that.
I know firsthand that there were some pretty bad experiences in the incident that tore Leverage 1.0 apart, which nobody appears to feel able to talk about.
I am currently not at all optimistic that we’re managing to balance this correctly? I also want this to go right. I’m not quite sure how to do it.
That’s pretty fair. I am open to taking down this comment, or other comments I’ve made. (Not deleting them forever, I’ll save them offline or something.) Your feedback is helpful here and revealing to me, and I feel myself updating because of it.
I have commented somewhere else that I do not like LessWrong for this discussion… because a) it seems bad for justice to be served, b) it removes a bunch of context data that I personally think is super relevant (including emotional, physical layers), and c) LW is absolutely not a place designed for healing or reconciliation… and it also seems only ‘okay’ for sense-making as a community. It is maybe better for sense-making at the individual intellectual level. So… I guess LW isn’t my favorite place for this discussion to be happening… I wonder what you think.
(Separately) I care about folks from Leverage. I am very fond of the ones I’ve met. Zoe charted me once, and I feel fondly about that. I’ve been charted a number of times at Leverage, and it was good, and I personally love CT charting / Belief Reporting and use, reference, and teach it to others to this day. Although it’s my own version now. I went to a Paradigm workshop once, as well as several parties or gatherings.
My felt sense of my time at the workshop (especially during more casual hang-out-y parts of it) is like a sense of sad distance… like, oh I would like to be friends with these people… but mentally / emotionally they seem “hard to access.”
I’m feeling compassion towards the ones who have suffered and are suffering. I don’t need to be personal friends with anyone, but … if there’s a way I can be of service, I am interested.
Open and free invitation: If anyone involved in the Leverage stuff in some way wants someone to hold space for you as you process things, I am open to offer that, over Zoom, in a confidential manner. (I am not very involved in the community normally, as I am committed to being at the Monastic Academy in Vermont for a long while, and I don’t engage in divisive / gossipy speech. It is wrong speech :P) Cat would probably vouch for me. But basically uhh, even if what you want to say would normally be totally crazy to most rationalists or even most Westerners, I have ventured so far outside the Overton window that I doubt I’ll be taken aback. If that helps. :P
You can FB msg me or gmail me (unrealeel).
Since it’s mostly just pointers to stuff I’ve already said/implied… I’ll throw out a quick comment.
I would like it if somebody started something like a carefully-moderated private Facebook group, mostly of core people who were there, to come to grips with their experiences? I think this could be good.
I am slightly concerned that people who are still in the grips of “Leverage PR campaigning” tendencies, will start trying to take it over or otherwise poison the well? (Edit: Or conversely, that people who still feel really hurt or confused about it might lash out more than I’d wish. I personally, am more worried about the former.) I still think it might be good, overall.
Be sure to be clear EARLY about who you are inviting, and who you are excluding! It changes what people are willing to talk about.
...I am not personally the right person to do this, though.
(It is too easy to “other” me, if that makes sense.)
I feel like one of the only things the public LW thread could do here?
Is ensuring public awareness of some of the unreasonably-strong reality/truth-suppressive pressures that were at play here, that there were some ways in which secrecy agreements were leveraged pretty badly to avoid accountability for harms, and showing a public ramp-down of opportunities to do so in the future.
Along with doing what we can, to signal that we generally stand against people over-simplistically demonizing the people and organizations involved in this.
… unreasonably-strong reality/truth-suppressive pressures that were at play here, that there were some ways in which secrecy agreements were leveraged pretty badly to avoid accountability for harms …
Hmm. This seems worth highlighting.
The NDAs (plus pressure to sign) point to this.
…
(The rest of this might be triggering to anyone who’s been through gaslighting / culty experiences. Blunt descriptions of certain forms of control and subjugation.)
...
About the rest of the truth-suppressive measures, I can only speculate. Here’s a list of possible mechanisms that come to mind; some, but not all, were corroborated by Zoe’s report:
• Group hazing or activities that cause collective shame, making certain things hard to admit to oneself and others (plus, inserting a bucket error where ‘shameful activity’ is bucketed with ‘the whole project’ or something)
  ◦ This could include implanting group delusions that are shameful to admit.
• Threats to one’s physical person or loved ones for revealing things
• Threats to one’s reputation or ability to acquire resources for revealing things
• Deprivation used to negatively / positively reinforce certain behaviors or stories (“well, if you keep talking like that, we’re gonna have to take your phone / food / place to sleep”)
• Gaslighting specific individuals or subgroups (“what you’re experiencing is in your own head; look at other people, they are doing fine, stop being crazy / stop ruining the vibe / stop blocking the project”)
  ◦ A lot of things could fit into this category.
• Causing dissociation. (Thus disconnecting a person from their true yes/no or making it harder for them to discern truth from fiction.) This is very common among modern humans, though, and doesn’t seem as evil-sounding as the other examples. Modern humans are already very dissociated afaict.
  ◦ It would become more evil if it was intentionally exploited or amplified.
  ◦ Dissociation could be generalized or selective. Selective seems more problematic because it could be harder to detect.
• Pretending there is common knowledge or an obvious norm around what should be private / confidential, when there is not. (There is some of this going around rationalist spaces already.) “Don’t talk about X behind their back, that’s inappropriate.” or “That’s their private business, stay out of it.” <-- Said in situations where it’s not actually inappropriate, or where claims of it being someone’s ‘private business’ are overreaching.
• Deliberately introducing and enforcing a norm of privacy or confidentiality that breaks certain normal and healthy social accountability structures. (Compassionate gossip is healthy in groups, especially those living in residential community. Rationalists seem not to get this, though, and tend to break Chesterton’s fence on it, but I attribute this to hubris. It seems worse to me if these norms are introduced out of self-serving fear.)
• Sexual harassment, molestation, or assault. (This tends to result in silencing pretty effectively.)
• Creating internal jockeying, using an artificial scarcity around status or other resources. A culture of oneupmanship. A culture of having to play ‘loyal’. People getting way too sucked into this game and having their motives hijacked. They internally align themselves with the interests of certain leaders or the group, leading to secrecy being part of their internal motivation system.
• This one is really speculative, but if I imagine buying into the story that Geoff is like, a superintelligence basically, and can somehow predict my own thoughts and moves before I can, then … maybe I get paranoid about even having thoughts that go against (my projection of) his goals.
  ◦ Basically, if I thought someone could legit read my mind and they were not-compassionate, or if I thought that they could strategically outmaneuver me at every turn due to their overwhelming advantage, that might cause some fucked up stuff in my head that stays in there for a while.
If this resonates with you, I am very sorry.
I welcome additions to this list.
• “You can’t rely on your perspective / Everything is up for grabs.” All of your mental content (ideas, concepts, motions, etc.) is potentially good (and should be leaned more heavily on, overriding others) / bad (and should be ignored / downvoted / routed around / destroyed / pushed against), and more openness to change is better, and there’s no solid place from which you can stand and see things. Of course, this is in many ways true and useful; but leaning into this creates much more room for others to selectively up/downvote stuff in you to avoid you reaching conclusions they don’t want you to reach; or more likely, up/downvote conclusions, and have you rearrange yourself to harmonize with those judgements.
• Trolling Hope placed in the project / leadership. Like: I care deeply that things go well in the world; the only way I concretely see that might happen is through this project; so if this project is doomed, then there’s no Hope; so I may as well bet everything on worlds where the project isn’t doomed; so worlds where the project is doomed are irrelevant; so I don’t see / consider / admit X if X implies that the project is doomed, since X is entirely about irrelevant worlds.
• Emotional reward conditioning. (This one is simple or obvious, but I think it’s probably actually a significant portion of many of these sorts of situations.) When you start to say information I don’t like, I’m angry at you, annoyed, frustrated, dismissive, scornful, derisive, insulting, blank-faced, uninterested, condescending, disgusted, creeped out, pained, hurt, etc. When you start to hide information I don’t like, or expound the opposite, I’m pleasant, endeared, happy, admiring, excited, etc. Conditioning shades into and overlaps other tactics like stonewalling (blank-faced, aiming at learned helplessness), shaming, and running interference (changing the subject), but conditioning has a particular systematic effect of making you “walk on eggshells” about certain things and feel relief / safety when you stick to appropriate narratives. And this systematic effect can be very strong and persist even when you’re away from the people who put it there, if you didn’t perfectly compartmentalize how-to-please-them from everything else in your mind.
Do you have a suggestion for another forum that you think would be better?
In particular, do you have pointers to online forums that do incorporate the emotional and physical layers (“in a non-toxic way”, he adds, thinking of twitter). Or do you think that the best way to do this is just not online at all?
CFAR’s recent staff reunion seemed to do all right. It wasn’t, like, optimized for safety or making sure everyone was heard equally or something like that, but such features could be added if desired. Having skilled third-party facilitators seemed good.
Oh you said ‘online’. Uhhh.
Online fishbowl Double Cruxes would get us like … 30% of the way there maybe? Private / invite only ones?
One could run an online Group Process like thing too. Invite a group of people into a Zoom call, and facilitate certain breakout sessions? Ideally with facilitation in each breakout group?
I am not thinking very hard about it.
We need a lot of skill points in the community to make such things go well. I’m not sure how many skill points we’re at.
Meta: I think it makes some good points. I do not think it was THAT bad, and I think the discussion was good. I would keep it up, but it’s your call. Possibly adding an “Edit: (further complicated thoughts)” at the top? (Respect for thinking about it, though.)
I see what you’re doing? And I really appreciate that you are doing it.
...but simultaneously? You are definitely making me feel less safe to talk about my personal shit.
(My position on this is, and has always been: “I got a scar from Leverage 1.0. I am at least somewhat triggered; on both that level, and by echoes from a past experience. I am scared that me talking about my stuff, rather than doing my best to make and hold space, will scare more centrally-affected people off. And I know that some of those people, had an even WORSE experience than I did. In what was, frankly, a surreal and really awful experience for me.”)
Multiple times on this thread I’ve seen you make the point about figuring out what responsibility should fall on Geoff, and what should be attributed to his underlings.
I just want to point out that it is a pattern for powerful bad actors to be VERY GOOD at never explicitly giving a command for a bad thing to happen, while still managing to get all their followers on board and doing the bad thing that they only hinted at / set up incentive structures for, etc.
I wanted to immediately agree. Now I’m pausing...
It seems good to try to distinguish between:
• Well-meaning but flawed leader sets up a system or culture that has blatant holes that allow abuse to happen. This was unintentional, but they were careless or blind or ignorant, and this resulted in harm. (In this case, the leader should be held accountable, but there’s decent hope for correction.)
  ◦ Of course, some of the ‘flawed’ thing might be shadow stuff, in which case it might be slippery and difficult to see, and the leader may have various coping mechanisms that make accountability difficult. I think this is often the case with leaders, and as far as I can tell, most leaders have shadow stuff, and it negatively impacts their groups, to varying degrees. (I’m worried about Geoff in this case because I think intelligence + competence + shadow stuff is a lot more difficult. The more intelligent and powerful you are, the longer you can keep outmaneuvering attempts to get you to see your own shadow; I’ve seen this kind of thing, and it’s bad.)
• The leader is not well-meaning and is deliberately exploitative in an intentional way. They created a system that was designed to exploit people systematically, and they lack care in their body or soul for the beings they hurt. They internally applaud when they come up with clever systems that avoid accountability or responsibility while gaining personal benefit. They hope they can keep this up forever. They have a deep-seated fear of failure, and they will do whatever it takes to avoid failure. (This feels more like Jeffrey Epstein.)
You could try to argue that this is also ‘shadow stuff’, but I think the intention matters. If the leader’s goal and desire was to create healthy and wholesome community and failed, this is different from the goal and plan being to exploit people.
But anyway, point is: I am wanting discernment on this level of detail. For the sake of knowing best interventions and moves.
I am not interested in putting blame on particular individuals, or in the epistemic question of who’s more or less responsible. I am interested in the group dynamics, apart from that question.
I’m not sure about this, and I don’t think you were trying to say this, but, I doubt that the two categories you gave usefully cover the space, even at this level of abstraction. Someone could be “well-meaning” in the sense of all their explicit, and even all their conscious, motives being compassionate, life-oriented, etc., while still systematically agentically cybernetically motivatedly causing and amplifying harm. I think you were getting at this in the sub-bullet-point, but the sort of person I’m describing would both meet the description “well-meaning; unintentional harm” and also this from your second bullet-point:
They created a system that was designed to exploit people systematically, and they lack care in their body or soul for the beings they hurt. They internally applaud when they come up with clever systems that avoid accountability or responsibility while gaining personal benefit. They hope they can keep this up forever. They have a deep-seated fear of failure, and they will do whatever it takes to avoid failure.
Maybe I’m just saying, I don’t know what you (or I, or anyone) mean by “well-meaning”: I don’t know what it is to be well-meaning, and I don’t know how we would know, and I don’t know what predictions to make if someone is well-meaning or not. (I’m not saying it’s not a thing, it’s very clearly a thing; it’s just that I want to develop our concepts more, because at least my concepts are pushed past the breaking point in abusive situations.) For example, someone might both (1) have never once consciously explicitly worked out any strategy or design to make it easier to harm people, and (2) across contexts, take actions that reliably develop/assemble a social field where people are being systematically harmed, and not update on information about how to not do that.
Maybe it would help to distinguish “categories of essence” from “categories of treatment”. Like, if someone is so drowning in their shadow that they reliably, proactively, systematically harm people, then a category-of-essence question is like, “in principle, is there information that could update them to stop doing this?”, and a category-of-treatment question is like, “regardless of what they really are, we are going to treat them exactly like we’d treat a conscious, malevolent, deliberate exploiter”.
I appreciate the added discernment here. This is definitely the kind of conversation I’d like to be having!
someone might both (1) have never once consciously explicitly worked out any strategy or design to make it easier to harm people, and (2) across contexts, take actions that reliably develop/assemble a social field where people are being systematically harmed, and not update on information about how to not do that.
Agree. I was including that in ‘shadow stuff’.
The main difference between well-meaning and not, I think, for me, is that the well-meaning person is willing to start engaging in conversations or experimenting with new systems in order to lessen the problems. Even though it’s in their shadow and they cannot see it, and it might take a lot to convince them, after some time period (which could be years!), they are game enough to start making changes, trying to see it, etc.
I believe Anna S is an example of such a well-meaning person, but also I think it took her a pretty long time to come to grips with the patterns? I think she’s still in the process of discerning it? But this seems normal. Normal human level thing. Not sociopathic Epstein thing.
More controversially perhaps, I think Brent Dill has the potential to see and eat his shadow (cuz I think he actually cares about people and I’ve seen his compassion), but as you put it, he is “so drowning in his shadow that he reliably, systematically harms people.” And I actually think it’s the compassionate thing to do to prevent him from harming more people.
So where does Geoff fall here? I am still in that inquiry.
While this is true, cult environments by their nature often allow other bad actors besides the leader to rise into positions of power within them.
I think the Osho community is a good example. Osho himself was open about his community having run the biggest bioterror attack in the US, which otherwise likely wouldn’t have been discovered; it doesn’t seem to me that he was the person most responsible for it, but rather his right hand at the time.
As far as cult dynamics go, it’s not only the leader getting his followers to do things; it’s also various followers acting in a way where they treat the leader as a guru, whether or not the leader wants that to happen, which in turn often affects the mindset and actions of the leader.
At the moment it’s unclear to me, for example, to what extent CEA shares part of the responsibility for enabling Leverage.
My current sense? Is that both Unreal and I are basically doing a mix of “take an advocate role” and “using this as an opportunity to get some of what the community got wrong last time (with our own trauma) right.” But for different roles, and for different traumas.
It seemed worth being explicit and calling this out. (I don’t necessarily think this is bad? I also think both of us seem to have done a LOT of “processing our own shit” already, which helps.)
But doing this is… exhausting for me, all the same. I also, personally, feel like I’ve taken up too much space for a bit. It’s starting to wear on me in ways I don’t endorse.
I’m going to take a step back from this for a week, and get myself to focus on living the rest of my life. After a week, I will circle back. In fact, I COMMIT to circling back.
And honestly? I have told several people about the exact nature of my Leverage trauma. I will tell at least several more people about it, before all of this is over.
It’s not going to vanish. I’ve already ensured that it can’t. I can’t quite commit to “going full public,” because that might be the wrong move? But I will not rest on this until I have done something broadly equivalent.
I am a little bit scared of some sort of attempts to undermine me emerging as a consequence, because there’s a trend in even the casual reports that leans in this direction? But if it happens, I will go public about THAT fact.
I am a lot less scared of the repercussions than almost anyone else would be. So, fuck it.
(But also? My experience doesn’t necessarily rule out “most of the bad that happened here was a total lack of guard-rails + culty death-spirals.” It would take some truly awful negligence to have that few guard-rails, and I would not want that person running a company again? But still, just fyi. Yeah, I know, I know, it undercuts the drama of my last statement.)
But if anyone wonders why I vanished? I’m taking a break. That is what I’m doing.
Edit: (One person reading this reports below that this made them more reluctant to come forward with their story, and so that seems bad to me. I have mentally updated as a result. More relevant discussion below.)
I notice that there’s not that much information public about what Geoff actually Did and Did Not Do. Or what he instigated and what he did not. Or what he intended or what he did not intend.
Um, I would like more direct evidence of what he actually did and did not do. This is cruxy for me in terms of what should happen next.
Right now, based just on the Medium post, one plausible take is that the people in Geoff’s immediate circle may have been taking advantage of their relative power in the hierarchy to abuse the people under them.
See this example from Zoe:
“They and Geoff” makes it sound like Zoe’s supervisor basically name-dropped Geoff as a way to add weight to a scare tactic. Like “better watch out cuz the boss thinks you’re not committed enough...” But it’s not really clear what the boss actually said or did not say… This supervisor might just be using a move. (I welcome additional clarity.)
The most directly ‘damning’ thing, as far as I can tell, is Geoff pressuring people to sign NDAs.
A lot of the other stuff seems like it’s due to the people around Geoff elevating him to an unreasonable pedestal and treating him like a savior. Maybe Geoff should have done more to stop this from escalating / done more to make people chill out about him and his supposed specialness. But him failing to control his flock is a different failure from him feeding them lies or requiring worship. I’m not seeing any statements about this. I welcome more information and clarity.
I am wanting clarity here because I am very aware of people’s strong desire for a [cult] leader. It can be pretty severe. And this is very much a co-participation between leaders and followers.
I know what it’s like from the inside to want someone to be my cult leader, god or parent figure. And I have low-tolerance for narratives that try to take my personal agency away from me—that claim I was a victim of mind control or whatever, rather than someone who bottom-level gave up my power to them.
Even if I didn’t consciously give away my power and it just sort of happened, I think it’s still wrong to write a narrative where I merely blame the other person and absolve myself of all responsibility or agency. This sounds unhealthy to hold onto, as a story.
I’m def not trying to absolve Geoff (or anyone) of responsibility, accountability, or agency. But also ew scapegoating is gross?
My main desire is for more information, or for people to realize that we might not be meeting relevant cruxes for how to move forward, and that we should continue to investigate and hold off on taking heavy actions.
I received an email from a Paradigm board member on behalf of Paradigm and Leverage that aims to provide some additional clarity on the information-sharing situation here. Since the email specifies that it can be shared, I’ve uploaded it to my Google Drive (with some names and email addresses redacted). You can view it here.
The email also links to the text of the information-sharing agreement in question with some additional annotations.
[Disclosure: I work at Leverage, but did not work at Leverage during Leverage 1.0. I’m sharing this email in a personal rather than a professional capacity.]
I do applaud explicitely clarifying that people are free to share their own experiences.
Thanks for sharing this. !
I believe this is public information if I look for your 990s, but could you or someone list the Board members of Leverage / Paradigm, including changes over time?
I don’t know how realistic this worry is, but I’m a bit worried about scenarios like:
A signatory doesn’t share important-to-share info because they interpret the lnformation Arrangement doc (even with the added comments) as too constraining.
My sense is that there’s still a lot of ambiguity about exactly how to interpret parts of the agreement? And although the doc says it “is meant to be based on norms of good behavior in society” I don’t see a clause explicitly allowing people’s personal consciences to supersede the agreement. (I might just have missed it.)
Or: A signatory doesn’t share important-to-share info because they see the original agreement as binding, not the new “clarifications and perspective today” comments.
(I don’t know how scrupulous ex-Leveragers are about sticking to signed informal agreements, but if the agreement has moral force, I could imagine some people going ‘the author can’t arbitrarily reinterpret the agreement post facto, when the agreement didn’t specify that you have this power’.
Indeed, signing a document with binding moral force seems pretty risky to me if the author has lots of leeway to later reinterpret what parts of the agreement mean. But maybe I’m misunderstanding the social context or ethical orientation of the Leveragers—I might be reading the agreement way more strictly than anyone construed it at the time.)
Is there any reason not to just say something like ‘to the extent we have the power to void this agreement, the whole agreement is now void’? People could then still listen to their consciences, and your recommendations, about what to do next; but I’d be less worried anyone feels constrained by having signed the thing. I don’t know the late-Leverage-1.0 people well, but I currently have more faith in y’alls moral judgment than in your moral-judgment-constrained-by-this-verbal-commitment.
The main reason I could imagine it being bad to say ‘this is void now’ is if there’s an ex-Leverager you think is super irresponsible, but who you convinced to sign the agreement—someone who you’d expect to make terrible reckless decisions if they weren’t bound by the thing.
But in that case I’d still think it makes sense to void the agreement for the people who are basically sensible and well-intentioned, which is hopefully just about everyone.
It seems to me “this is not a legal agreement” is basically such a clause.
It seems that at the end of Leverage 1.0 groups were in conflict. There’s a strong interest in that conflict not playing out in a way where different people publish private information of each other and then retaliate in kind.
It might very well be that plenty of the ex-Leverages don’t speak out because they are afraid that private information about them will be openly published in retaliation if they do.
Given that there’s a section of (10) Expected lessening it seems strange to me to see the original agreement as infinitely binding.
I’m really happy to see this! Though I was momentarily confused by the “so” here—why would there be less moral obligation to uphold an agreement, just because the agreement isn’t legally binding, some other people involved didn’t sign it, and the signatory has switched jobs? Were those stipulated as things that would void the agreement?
My current interpretation is that Matt’s trying to say something more like ‘We never took this agreement super seriously and didn’t expect you to take it super seriously either, given the wording; we just wanted it as a temporary band-aid in the immediate aftermath of Leverage 1.0 dissolving, to avoid anyone taking hasty action while tensions were still high. Here’s a bunch of indirect signs that the agreement is no big deal and doesn’t have moral force years later in a very different context: (blah).’ It’s Bayesian evidence that the agreement is no big deal, not a deductive proof that the agreement is ~void. Is that right?
Another thing I want to mentally watch out for:
It might be tempting for some ex-Leverage people to use Geoff as the primary scapegoat rather than implicating themselves fully. So as more stories come out, I plan to be somewhat delicate with the evidence. The temptation to scapegoat a leader is pretty high and may even seem justifiable in a “ends justifies the means” kind of thinking.
I don’t seem to personally be OK with using misleading information or lies to bolster a case against a person, even if this ends up “saving” a lot of people. (I don’t think it actually saves them… people should come to grips with their own errors, not hide behind a fallback person.)
So… Leverage, I’m looking at you as a whole community! You’re not helpless peons of Geoff Anders.
When spiritual gurus go out of control, it’s not a one-man operation; there are corroborators, enablers, people who hid information, yes-men and sycophants, those too afraid to do the right thing or speak out against wrongdoing, those too protective of personal benefits they may be receiving (status, friends, food, housing), etc.
There’s stages of ‘coming to terms’ with something difficult. And a very basic model would be like
Defensive / protective stage. I am still blended and identified with a problematic pattern or culture, so I defend it. It feels like my own being is at stake or on the line. It’s hard to see what’s true, and I am partially in denial or in dissociation—although I myself may not realize it.
Mitosis stage. I am in the process of a painful identity-level separation from the pattern or culture. I start feeling anger towards it, some grief, horror, etc. It’s likely I feel victimized. For the sake of gaining clarity, a victim narrative is more helpful than the previous narrative of “the thing is actually good though” or whatever fog of denial I was in.
Grief stage. Even more open and full realization of what happened and its problematic nature. Realizing my own personal part in it and the extent to which my actions were my own and also contributed to harm. This can be a very difficult stage, and may come with shame, guilt, remorse, and immense sadness.
Letting go and integration stage. Happy relief comes when all the disparate parts are integrated and all is forgiven. I hold myself to a new, higher standard, and I hold others also to a higher standard. I feel good about where I stand now, with more clarity and compassion. I see clearly what the mistakes were and how to avoid them. I can guide or warn others from making similar mistakes. There’s no emotional or trauma residue left in me. My capacity has expanded to hold more complexity and diversity. I am more accepting of the past, can see from many perspectives, and ready to live fully present.
Stage 2 is a dangerous stage, and it is one I have been in, and where I was most volatile, angry, and likely to cause damage. Kind of wanting more common knowledge about this as a Thing so that we are collectively aware that damage is best minimized. Although I imagine disagreements with this.
I basically agree with this.
But also, I think pretty close to ZERO people who were deeply affected (aside from Zoe, who hasn’t engaged beyond the post) have come forward in this thread. And I… guess we should talk about that.
I know from firsthand, that there were some pretty bad experiences in the incident that tore Leverage 1.0 apart, which nobody appears to feel able to talk about.
I am currently not at all optimistic that we’re managing to balance this correctly? I also want this to go right. I’m not quite sure how to do it.
That’s pretty fair. I am open to taking down this comment, or other comments I’ve made. (Not deleting them forever, I’ll save them offline or something.) Your feedback is helpful here and revealing to me, and I feel myself updating because of it.
I have commented somewhere else that I do not like LessWrong for this discussion… because a) It seems bad for justice to be served. and b) It removes a bunch of context data that I personally think is super relevant (including emotional, physical layers) and c) LW is absolutely not a place designed for healing or reconciliation… and it also seems only ‘okay’ for sense-making as a community. It is maybe better for sense-making at the individual intellectual level. So… I guess LW isn’t my favorite place for this discussion to be happening… I wonder what you think.
(Separately) I care about folks from Leverage. I am very fond of the ones I’ve met. Zoe charted me once, and I feel fondly about that. I’ve been charted a number of times at Leverage, and it was good, and I personally love CT charting / Belief Reporting and use, reference, and teach it to others to this day. Although it’s my own version now. I went to a Paradigm workshop once, as well as several parties or gatherings.
My felt sense of my time at the workshop (especially during more casual hang-out-y parts of it) is like a sense of sad distance… like, oh I would like to be friends with these people… but mentally / emotionally they seem “hard to access.”
I’m feeling compassion towards the ones who have suffered and are suffering. I don’t need to be personal friends with anyone, but … if there’s a way I can be of service, I am interested.
Open and free invitation: If anyone involved in the Leverage stuff in some way wants someone to hold space for you as you process things, I am open to offer that, over Zoom, in a confidential manner. (I am not very involved in the community normally, as I am committed to being at the Monastic Academy in Vermont for a long while, and I don’t engage in divisive / gossipy speech. It is wrong speech :P) Cat would probably vouch for me. But basically uhh, even if what you want to say would normally be totally crazy to most rationalists or even most Westerners, I have ventured so far outside the overton window that I doubt I’ll be taken aback. If that helps. :P
You can FB msg me or gmail me (unrealeel).
Since it’s mostly just pointers to stuff I’ve already said/implied… I’ll throw out a quick comment.
I would like it if somebody started something like a carefully-moderated private Facebook group, mostly of core people who were there, to come to grips with their experiences? I think this could be good.
I am slightly concerned that people who are still in the grips of “Leverage PR campaigning” tendencies, will start trying to take it over or otherwise poison the well? (Edit: Or conversely, that people who still feel really hurt or confused about it might lash out more than I’d wish. I personally, am more worried about the former.) I still think it might be good, overall.
Be sure to be clear EARLY about who you are inviting, and who you are excluding! It changes what people are willing to talk about.
...I am not personally the right person to do this, though.
(It is too easy to “other” me, if that makes sense.)
I feel like one of the only things the public LW thread could do here?
Is ensuring public awareness of some of the unreasonably-strong reality/truth-suppressive pressures that were at play here, that there were some ways in which secrecy agreements were leveraged pretty badly to avoid accountability for harms, and showing a public ramp-down of opportunities to do so in the future.
Along with doing what we can, to signal that we generally stand against people over-simplistically demonizing the people and organizations involved in this.
Hmm. This seems worth highlighting.
The NDAs (plus pressure to sign) point to this.
…
( The rest of this might be triggering to anyone who’s been through gaslighting / culty experiences. Blunt descriptions of certain forms of control and subjugation. )
...
The rest of the truth-suppressive measures I can only speculate. Here’s a list of possible speculative mechanisms that come to mind, some of which were corroborated by Zoe’s report but not all:
Group hazing or activities that cause collective shame, making certain things hard to admit to oneself and others (plus, inserting a bucket error where ‘shameful activity’ is bucketed with ‘the whole project’ or something)
This could include implanting group delusions that are shameful to admit.
Threats to one’s physical person or loved ones for revealing things
Threats to one’s reputation or ability to acquire resources for revealing things
Deprivation used to negatively / positively reinforce certain behaviors or stories (“well, if you keep talking like that, we’re gonna have to take your phone / food / place to sleep”)
Gaslighting specific individuals or subgroups (“what you’re experiencing is in your own head; look at other people, they are doing fine, stop being crazy / stop ruining the vibe / stop blocking the project”)
A lot of things could fit into this category.
Causing dissociation. (Thus disconnecting a person from their true yes/no or making it harder for them to discern truth from fiction.) This is very common among modern humans, though, and doesn’t seem as evil-sounding as the other examples. Modern humans are already very dissociated afaict.
It would become more evil if it was intentionally exploited or amplified.
Dissociation could be generalized or selective. Selective seems more problematic because it could be harder to detect.
Pretending there is common knowledge or an obvious norm around what should be private / confidential, when there is not. (There is some of this going around rationalist spaces already.) “Don’t talk about X behind their back, that’s inappropriate.” or “That’s their private business, stay out of it.” <-- Said in situations where it’s not actually inappropriate or when claims of it being someone’s ‘private business’ is overreaching.
Deliberately introducing and enforcing a norm of privacy or confidentiality that breaks certain normal and healthy social accountability structures. (Compassionate gossip is healthy in groups, especially those living in residential community,. Rationalists seem to not get this though and tend to break Chesterton’s fence on this, but I attribute this to hubris. It seems worse to me if these norms are introduced out of self-serving fear.)
Sexual harassment, molestation, or assault. (This tends to result in silencing pretty effectively.)
Creating internal jockeying, using an artificial scarcity around status or other resources. A culture of oneupmanship. A culture of having to play ‘loyal’. People getting way too sucked into this game and having their motives hijacked. They internally align themselves with the interests of certain leaders or the group, leading to secrecy being part of their internal motivation system.
This one is really speculative, but if I imagine buying into the story that Geoff is like, a superintelligence basically, and can somehow predict my own thoughts and moves before I can, then … maybe I get paranoid about even having thoughts that go against (my projection of) his goals.
Basically, if I thought someone could legit read my mind and they were not-compassionate or if I thought that they could strategically outmaneuver me at every turn due to their overwhelming advantage, that might cause some fucked up stuff in my head that stays in there for a while.
If this resonates with you, I am very sorry.
I welcome additions to this list.
“You can’t rely on your perspective / Everything is up for grabs.” All of your mental content—ideas, concepts, motions, etc.--are potentially good (and should be leaned more heavily on, overriding others) / bad (and should be ignored / downvoted / routed around / destroyed / pushed against), and more openness to change is better, and there’s no solid place from which you can stand and see things. Of course, this is in many ways true and useful; but leaning into this creates much more room for others to selectively up/downvote stuff in you to avoid you reaching conclusions they don’t want you to reach; or more likely, up/downvote conclusions, and have you rearrange yourself to harmonize with those judgements.
Trolling Hope placed in the project / leadership. Like: I care deeply that things go well in the world; the only way I concretely see that might happen, is through this project; so if this project is doomed, then there’s no Hope; so I may as well bet everything on worlds where the project isn’t doomed; so worlds where the project is doomed are irrelevant; so I don’t see / consider / admit X if X implies that the project is doomed, since X is entirely about irrelevant worlds.
Emotional reward conditioning. (This one is simple or obvious, but I think it’s probably actually a significant portion of many of these sorts of situations.) When you start to say information I don’t like, I’m angry at you, annoyed, frustrated, dismissive, scornful, derisive, insulting, blank-faced, uninterested, condescending, disgusted, creeped out, pained, hurt, etc. When you start to hide information I don’t like, or expound the opposite, I’m pleasant, endeared, happy, admiring, excited, etc. etc. Conditioning shades into + overlaps other tactics like stonewalling (blank-faced, aiming at learned helplessness), shaming, and running intereference (changing the subject), but conditioning has a particular systematic effect of making you “walk on eggshells” about certain things and feeling relief / safety when you stick to appropriate narratives. And this systematic effect can be very strong and persist even when you’re away from the people who put it there, if you didn’t perfectly compartmentalize how-to-please-them from everything else in your mind.
Do you have a suggestion for another forum that you think would be better?
In particular, do you have pointers to online forums that do incorporate the emotional and physical layers (“in a non-toxic way”, he adds, thinking of twitter). Or do you think that the best way to do this is just not online at all?
CFAR’s recent staff reunion seemed to do all right. It wasn’t, like, optimized for safety or making sure everyone was heard equally or something like that, but such features could be added if desired. Having skilled third-party facilitators seemed good.
Oh you said ‘online’. Uhhh.
Online fishbowl Double Cruxes would get us like … 30% of the way there maybe? Private / invite only ones?
One could run an online Group Process like thing too. Invite a group of people into a Zoom call, and facilitate certain breakout sessions? Ideally with facilitation in each breakout group?
I am not thinking very hard about it.
We need a lot of skill points in the community to make such things go well. I’m not sure how many skill points we’re at.
Meta: I think it makes some good points. I do not think it was THAT bad, and I think the discussion was good. I would keep it up, but it’s your call. Possibly adding an “Edit: (further complicated thoughts)” at the top? (Respect for thinking about it, though.)
I see what you’re doing? And I really appreciate that you are doing it.
...but simultaneously? You are definitely making me feel less safe to talk about my personal shit.
(My position on this is, and has always been: “I got a scar from Leverage 1.0. I am at least somewhat triggered; on both that level, and by echoes from a past experience. I am scared that me talking about my stuff, rather than doing my best to make and hold space, will scare more centrally-affected people off. And I know that some of those people, had an even WORSE experience than I did. In what was, frankly, a surreal and really awful experience for me.”)
Multiple times on this thread I’ve seen you make the point about figuring out what responsibility should fall on Geoff, and what should be attributed to his underlings.
I just want to point out that it is a pattern for powerful bad actors to be VERY GOOD at never explicitly giving a command for a bad thing to happen, while still managing to get all their followers on board and doing the bad thing that they only hinted at / set up incentive structures for, etc.
I wanted to immediately agree. Now I’m pausing...
It seems good to try to distinguish between:
Well-meaning but flawed leader sets up a system or culture that has blatant holes that allow abuse to happen. This was unintentional but they were careless or blind or ignorant, and this resulted in harm. (In this case, the leader should be held accountable, but there’s decent hope for correction.)
Of course, some of the ‘flawed’ thing might be shadow stuff, in which case it might be slippery and difficult to see, and the leader may have various coping mechanisms that make accountability difficult. I think this is often the case with leaders, and as far as I can tell, most leaders have shadow stuff, and it negatively impacts their groups, to varying degrees. (I’m worried about Geoff in this case because I think intelligence + competence + shadow stuff is a lot more difficult. The more intelligent and powerful you are, the longer you can keep outmaneuvering attempts to get you to see your own shadow; I’ve seen this kind of thing, it’s bad.)
The leader is not well-meaning and is deliberately exploitative in an intentional way. They created a system that was designed to exploit people systematically, and they lack care in their body or soul for the beings they hurt. They internally applaud when they come up with clever systems that avoid accountability or responsibility while gaining personal benefit. They hope they can keep this up forever. They have a deep-seated fear of failure, and they will do whatever it takes to avoid failure. (This feels more like Jeffrey Epstein.)
You could try to argue that this is also ‘shadow stuff’, but I think the intention matters. If the leader’s goal and desire was to create healthy and wholesome community and failed, this is different from the goal and plan being to exploit people.
But anyway, point is: I am wanting discernment on this level of detail. For the sake of knowing best interventions and moves.
I am not interested in putting blame on particular individuals, or in the epistemic question of who’s more or less responsible. I am interested in the group dynamics, setting that question aside.
I’m not sure about this, and I don’t think you were trying to say this, but, I doubt that the two categories you gave usefully cover the space, even at this level of abstraction. Someone could be “well-meaning” in the sense of all their explicit, and even all their conscious, motives being compassionate, life-oriented, etc., while still systematically agentically cybernetically motivatedly causing and amplifying harm. I think you were getting at this in the sub-bullet-point, but the sort of person I’m describing would both meet the description “well-meaning; unintentional harm” and also this from your second bullet-point:
Maybe I’m just saying, I don’t know what you (or I, or anyone) mean by “well-meaning”: I don’t know what it is to be well-meaning, and I don’t know how we would know, and I don’t know what predictions to make if someone is well-meaning or not. (I’m not saying it’s not a thing, it’s very clearly a thing; it’s just that I want to develop our concepts more, because at least my concepts are pushed past the breaking point in abusive situations.) For example, someone might both (1) have never once consciously explicitly worked out any strategy or design to make it easier to harm people, and (2) across contexts, take actions that reliably develop/assemble a social field where people are being systematically harmed, and not update on information about how to not do that.
Maybe it would help to distinguish “categories of essence” from “categories of treatment”. Like, if someone is so drowning in their shadow that they reliably, proactively, systematically harm people, then a category-of-essence question is like, “in principle, is there information that could update them to stop doing this?”, and a category-of-treatment question is like, “regardless of what they really are, we are going to treat them exactly like we’d treat a conscious, malevolent, deliberate exploiter”.
I appreciate the added discernment here. This is definitely the kind of conversation I’d like to be having!
Agree. I was including that in ‘shadow stuff’.
The main difference between well-meaning and not, I think for me, is that the well-meaning person is willing to start engaging in conversations or experimenting with new systems in order to lessen the problems. Even though it’s in their shadow and they cannot see it and it might take a lot to convince them, after some time period (which could be years!), they are game enough to start making changes, trying to see it, etc.
I believe Anna S is an example of such a well-meaning person, but also I think it took her a pretty long time to come to grips with the patterns? I think she’s still in the process of discerning it? But this seems normal. Normal human level thing. Not sociopathic Epstein thing.
More controversially perhaps, I think Brent Dill has the potential to see and eat his shadow (cuz I think he actually cares about people and I’ve seen his compassion), but as you put it, he is “so drowning in his shadow that he reliably, systematically harms people.” And I actually think it’s the compassionate thing to do to prevent him from harming more people.
So where does Geoff fall here? I am still in that inquiry.
While this is true, cult environments by their nature often allow other bad actors besides the leader to rise into positions of power within them.
I think the Osho community is a good example. Osho himself was the one who spoke openly about his community having run the biggest bioterror attack on the US, which otherwise likely wouldn’t have been discovered; so it doesn’t seem to me that he was the person most responsible for it, but rather his right hand at the time.
As far as cult dynamics go, it’s not only the leader getting his followers to do things; it’s also various followers treating the leader as a guru, whether or not the leader wants that to happen, which in turn often affects the mindset and actions of the leader.
At the moment it’s unclear to me, for example, to what extent CEA shares part of the responsibility for enabling Leverage.
My current sense? Is that both Unreal and I are basically doing a mix of “take an advocate role” and “using this as an opportunity to get right some of what the community got wrong last time (with our own trauma).” But for different roles, and for different traumas.
It seemed worth being explicit and calling this out. (I don’t necessarily think this is bad? I also think both of us seem to have done a LOT of “processing our own shit” already, which helps.)
But doing this is… exhausting for me, all the same. I also, personally, feel like I’ve taken up too much space for a bit. It’s starting to wear on me in ways I don’t endorse.
I’m going to take a step back from this for a week, and get myself to focus on living the rest of my life. After a week, I will circle back. In fact, I COMMIT to circling back.
And honestly? I have told several people about the exact nature of my Leverage trauma. I will tell at least several more people about it, before all of this is over.
It’s not going to vanish. I’ve already ensured that it can’t. I can’t quite commit to “going full public,” because that might be the wrong move? But I will not rest on this until I have done something broadly equivalent.
I am a little bit scared of some sort of attempts to undermine me emerging as a consequence, because there’s a trend in even the casual reports that leans in this direction? But if it happens, I will go public about THAT fact.
I am a lot less scared of the repercussions than almost anyone else would be. So, fuck it.
(But also? My experience doesn’t necessarily rule out “most of the bad that happened here was a total lack of guard-rails + culty death-spirals.” It would take some truly awful negligence to have that few guard-rails, and I would not want that person running a company again? But still, just fyi. Yeah, I know, I know, it undercuts the drama of my last statement.)
But if anyone wonders why I vanished? I’m taking a break. That is what I’m doing.