Here’s a long, detailed account of a Leverage experience which, to me, reads as significantly more damning than the above post: https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b

Miscellaneous first-pass thoughts:

Geoff had everyone sign an unofficial NDA upon leaving agreeing not to talk badly about Leverage
I really don’t like this. Could I see the NDA somehow? If the wording equally forbids sharing good and bad stuff about Leverage, then I’m much less bothered by this. Likewise if the wording forbids going into certain details, but lets former staff criticize Leverage at a sufficient level of abstraction.
Otherwise, this seems very epistemically distorting to me, and in a direction that things already tend to be distorted (there’s pressure against people saying bad stuff about their former employer). How am I supposed to form accurate models of Leverage if former employees can’t even publicly say ‘yeah, I didn’t like working at Leverage’??
One of my supervisors would regularly talk about this as a daunting but inevitable strategic reality (“obviously we’ll do it, and succeed, but seems hard”).
“It” here refers to ‘taking over the US government’, which I assume means something like ‘have lots of smart aligned EAs with very Leverage-y strategic outlooks rise to the top decision-making ranks of the USG’. If I condition on ‘Leverage staff have a high probability of succeeding here’, then I could imagine that a lot of the factors justifying confidence are things that I don’t know about (e.g., lots of people already in high-ranking positions who are quietly very Leverage-aligned). But absent a lot of hidden factors like that, this seems very overconfident to me, and I’m surprised if this really was a widespread Leverage view.
In retrospect, the guy clearly needed help (he was talking to G-d, believed he was learning from Kant himself live across time, and felt the project was missing the importance of future contact w/ aliens — this was not a joke)
???? I’m so confused about what happened here. The aliens part (as stated) isn’t a red flag for me, but the Kant thing seems transparently crazy to me. I have to imagine there’s something being lost in translation here, and missing context for why people didn’t immediately see that this person was having a mental breakdown?
For example, it wasn’t uncommon to hear “Connection Theory is the One True Theory of Psychology,” “Geoff is the best philosopher who has ever lived,” “Geoff is maybe the only mind who has ever existed who is capable of saving the world,” or “Geoff’s theoretical process is world-historical.”
A crux for me is that I don’t think Geoff’s philosophy heuristics are that great. He’s a very smart and nimble reasoner, but I’m not aware of any big cool philosophy things from him, and I do think he’s very wrong about the hard problem of consciousness (though admittedly I think this is one of the hardest tests of philosophy-heuristics, and breaks a lot of techniques that normally work elsewhere).
If I updated toward ‘few if any Leveragers really believed Geoff was that amazing a philosopher’, or toward ‘Geoff really is that amazing of a philosopher’, I’d put a lot less weight on the hypothesis ‘Leverage 1.0 was systematically bad-at-epistemics on a bunch of core things they spent lots of time thinking about’.
FWIW, my own experience is that people often miss fairly blatant psychotic episodes; so I’m not sure how Leverage-specific the explanation needs to be for this one. For example, once I came to believe that an acquaintance was having a psychotic episode and suggested he see a psychiatrist; the psychiatrist agreed. A friend who’d observed most of the same data I had asked me how I’d known. I said it was several things, but that the bit where our acquaintance said God was talking to him through his cereal box was one of the tip-offs from my POV. My friend’s response was “oh, I thought that was a metaphor.” I know several different stories like this one, including a later instance where I was among those who missed what in hindsight was fairly blatant evidence that someone was psychotic, none of which involved weird group-level beliefs or practices.
I’d guess that the people in question had a mostly normal air to them during the episode, just starting to say weird things?
Most people’s conception of a psychotic episode probably involves a sense of the person acting like a stereotypical obviously crazy person on the street. Whereas if it’s someone they already know and trust, just acting slightly more eccentric than normal, people seem likely to filter everything the person says through a lens of “my friend’s not crazy so if they do sound crazy, it’s probably a metaphor or else I’m misunderstanding what they’re trying to say”.

Yes.
I would imagine that other people saw his relationship to Kant as something like Kant being a Shoulder Advisor, maybe with additional steps to make it feel more real.
In an environment where some people do seances and use crystals to clear negative energy, they might have thought that believing in the realness of rituals makes the rituals more effective. On that view, someone who managed to literally believe they were talking to Kant, rather than just to their mind’s abstraction of Kant, would be doing something more powerful.
I do think they messed up here by not understanding why truth is valuable, but I can see how things played out that way.
They seem to have believed that they could turn people into having Musk-level competence. A hundred people with Musk-level competence might execute a plan like the one Cummings proposed to successfully take over the US government.

If they really could transform people in that way, that might be reasonable. Stories like Zoe’s, however, suggest that they didn’t really have an ability to do that, and that instead their experiments dissolved into strange infighting and losing touch with reality.

Interestingly, my comment further down that asks for details about the information-sharing practices has very few upvotes (https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=qqAFyqZrfAdHsuBz4). It seems like most people reading this thread are more interested in upvoting judgments than in requests for information.
To me, saying that someone is a better philosopher than Kant seems less crazy than saying that saying that someone is a better philosopher than Kant seems crazy.
Isn’t the thing Rob is calling crazy that someone “believed he was learning from Kant himself live across time”, rather than believing that e.g. Geoff Anders is a better philosopher than Kant?
Yeah, I wasn’t talking about the ‘better than Kant’ thing.
Regarding the ‘better than Kant’ thing: I’m not particularly in awe of Kant, so I’m not shocked by the claim that lots of random people have better core philosophical reasoning skills than Kant (even before we factor in the last 240 years of philosophy, psychology, etc. progress, which gives us a big unfair advantage vs. Kant).
The part I’m (really quite) skeptical of is “Geoff is the best philosopher who’s ever lived”. What are the major novel breakthroughs being gestured at here?

It’s more crazy after you load in the context that people at Leverage think Kant is more impressive than e.g. Jeremy Bentham.
CFAR recently hosted a “Speaking for the Dead” event, where a bunch of current and former staff got together to try to name as much as we could of what had happened at CFAR, especially anything that there seemed to have been (conscious or unconscious) optimization to keep invisible.
CFAR is not dead, but we took the name anyhow from Orson Scott Card’s novel Speaker for the Dead, which has quotes like:
“...and when their loved ones died, a believer would arise beside the grave to be the Speaker for the Dead, and say what the dead one would have said, but with full candor, hiding no faults and pretending no virtues.”
“A strange thing happened then. The Speaker agreed with her that she had made a mistake that night, and she knew when he said the words that it was true, that his judgment was correct. And yet she felt strangely healed, as if simply saying her mistake were enough to purge some of the pain of it. For the first time, then, she caught a glimpse of what the power of speaking might be. It wasn’t a matter of confession, penance, and absolution, like the priests offered. It was something else entirely. Telling the story of who she was, and then realizing that she was no longer the same person. That she had made a mistake, and the mistake had changed her, and now she would not make the mistake again because she had become someone else, someone less afraid, someone more compassionate.”
“… there were many who decided that their life was worthwhile enough, despite their errors, that when they died a Speaker should tell the truth for them.”
CFAR’s “speaking for the dead” event seemed really good to me. Healing, opening up space for creativity. I hope the former members of Leverage are able to do something similar. I really like and appreciate Zoe sharing all these details, and I hope folks can meet her details with other details, all the details, whatever they turn out to have been.
I don’t know what context permits that kind of conversation, but I hope all of us on the outside help create whatever kind of context it is that allows truth to be shared and heard.
I felt strong negative emotions reading the above comment.
I think that the description of CFAR’s recent speaking-for-the-dead leaves readers feeling positive and optimistic and warm-fuzzy about the event, and about its striving for something like whole truth.
I do believe Anna’s report that it was healing and spacious for those who were there, and I share Anna’s hope that something similarly good can happen re: a Leverage conversation.
But I think I see the description of the event as trying to say something like “here’s an example of the sort of good thing that is possible.”
And I wanted anyone updating on that particular example to know that I was invited to the event, and declined the invitation, explaining that I genuinely could not cause myself to believe that I was actually welcome, or that it would be safe for me to be there.
This is a fact about me, not about the event. But it seems relevant, and I believe it changes the impression left by the above comment to be more accurate in a way that feels important.
(I was not the only staff alumnus absent, to be clear.)
I ordinarily would not have left this comment at all, because it feels dangerously … out of control, or something, in that I do not know what the-act-of-having-written-it will do. I do not understand and have no idea how to navigate the social currents here, and am not going to try. I will probably not contribute anything further to this thread unless directly asked by someone like Anna or a moderator.
What caused me to speak up anyway, despite feeling scared and in-over-my-head, was the bit in Anna’s other comment, where she said that she hopes people will not “refrain from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.”
EDIT: for context, I worked at CFAR from October of 2015 to October of 2018, and was its curriculum director and head-of-workshops for two of those three years.
The former curriculum director and head-of-workshops for the Center For Applied Rationality would not be welcome or safe at a CFAR event?
What the **** is going on?
It sounds to me like mission failure, but I suppose it could also just be eccentric people not knowing how to get along (which isn’t so much different?) 😕
It’s not just people not knowing how to get along.
I am trying to navigate between Scylla and Charybdis, here; trying to adhere to normal social norms of live-and-let-live and employers and employees not badmouthing each other without serious justification and so forth. Trying to be honest and candid without starting social wars.
But it’s not just people not knowing how to get along. It’s something much closer to the gestalt of this comment, although please note that I directly replied to that comment with a lot of disagreements on the level of fact.
I had to read this a few times before I pieced it together, so I wanted to make sure to clarify this publicly.
You are NOT saying this public forum is the place for that. Correct?
You are proposing that it might be nice, if someone else pulled this together?
Perhaps as something like a carefully-moderated facebook group, or an event.
(I think this would require a good moderator, or it will generate more drama than it solves. It would have to be someone who does NOT have “Leverage PR firm vibes,” and needs a lot of early clarity about who will not be invited. Also? Work out early what your privacy policy is! And be clear about how much it intends to be reports-oriented or action-oriented, and do not change that status later. People sometimes make these mistakes, and it’s awful.)
Because on the off-chance that you didn’t mean that...
I did have some contact with the Leverage strangeness here. But despite that, I have remarkably few social ties that would keep me from “saying what I think about it.” I still feel seriously reluctant to get into it, on a public forum like this. I imagine that some others would have an even harder time.
That’s right; I am daydreaming of something very difficult being brought together somehow, in person or in writing (probably slightly less easily-visible-across-the-whole-internet writing, if in writing). I’d be interested in helping but don’t have the know-how on my own to pull it off. I agree with you there’re lots of ways to try this and make things worse; I expect it’s key to have very limited ambitions and to be clear about how very much one is not attempting/promising.
I hope folks can meet her details with other details, all the details, whatever they turn out to have been.
This is an agreeable target, and also, it seems like we have to keep open hypotheses under which many kinds of detail are systematically not shared. E.g., if someone spent some years self-flagellating for remembering details that would contradict a narrative, those details might have not fully crystallized into verbalizable memories. So more detail is better, of course, but assuming that the (“default”) asymptote of more detail will be sufficient for anything is fraught, not that anyone made that assumption.