Miscellaneous first-pass thoughts:

> Geoff had everyone sign an unofficial NDA upon leaving, agreeing not to talk badly about Leverage
I really don’t like this. Could I see the NDA somehow? If the wording equally forbids sharing good and bad stuff about Leverage, then I’m much less bothered by this. Likewise if the wording forbids going into certain details, but lets former staff criticize Leverage at a sufficient level of abstraction.
Otherwise, this seems very epistemically distorting to me, and in a direction that things already tend to be distorted (there’s pressure against people saying bad stuff about their former employer). How am I supposed to form accurate models of Leverage if former employees can’t even publicly say ‘yeah, I didn’t like working at Leverage’??
> One of my supervisors would regularly talk about this as a daunting but inevitable strategic reality (“obviously we’ll do it, and succeed, but seems hard”).
“It” here refers to ‘taking over the US government’, which I assume means something like ‘have lots of smart aligned EAs with very Leverage-y strategic outlooks rise to the top decision-making ranks of the USG’. If I condition on ‘Leverage staff have a high probability of succeeding here’, then I could imagine that a lot of the factors justifying confidence are things that I don’t know about (e.g., lots of people already in high-ranking positions who are quietly very Leverage-aligned). But absent a lot of hidden factors like that, this seems very overconfident to me, and I’m surprised if this really was a widespread Leverage view.
> In retrospect, the guy clearly needed help (he was talking to G-d, believed he was learning from Kant himself live across time, and felt the project was missing the importance of future contact w/ aliens — this was not a joke)
???? I’m so confused about what happened here. The aliens part (as stated) isn’t a red flag for me, but the Kant thing seems transparently crazy to me. I have to imagine there’s something being lost in translation here, and missing context for why people didn’t immediately see that this person was having a mental breakdown?
> For example, it wasn’t uncommon to hear “Connection Theory is the One True Theory of Psychology,” “Geoff is the best philosopher who has ever lived,” “Geoff is maybe the only mind who has ever existed who is capable of saving the world,” or “Geoff’s theoretical process is world-historical.”
A crux for me is that I don’t think Geoff’s philosophy heuristics are that great. He’s a very smart and nimble reasoner, but I’m not aware of any big cool philosophy things from him, and I do think he’s very wrong about the hard problem of consciousness (though admittedly I think this is one of the hardest tests of philosophy-heuristics, and breaks a lot of techniques that normally work elsewhere).
If I updated toward ‘few if any Leveragers really believed Geoff was that amazing a philosopher’, or toward ‘Geoff really is that amazing of a philosopher’, I’d put a lot less weight on the hypothesis ‘Leverage 1.0 was systematically bad-at-epistemics on a bunch of core things they spent lots of time thinking about’.
> [...] I have to imagine there’s something being lost in translation here, and missing context for why people didn’t immediately see that this person was having a mental breakdown?
FWIW, my own experience is that people often miss fairly blatant psychotic episodes; so I’m not sure how Leverage-specific the explanation needs to be for this one. For example, once I came to believe that an acquaintance was having a psychotic episode and suggested he see a psychiatrist; the psychiatrist agreed. A friend who’d observed most of the same data I had asked me how I’d known. I said it was several things, but that the bit where our acquaintance said God was talking to him through his cereal box was one of the tip-offs from my POV. My friend’s response was “oh, I thought that was a metaphor.” I know several different stories like this one, including a later instance where I was among those who missed what in hindsight was fairly blatant evidence that someone was psychotic, none of which involved weird group-level beliefs or practices.
I’d guess that the people in question had a mostly normal air to them during the episode, just starting to say weird things?
Most people’s conception of a psychotic episode probably involves a sense of the person acting like a stereotypical obviously crazy person on the street. Whereas if it’s someone they already know and trust, just acting slightly more eccentric than normal, people seem likely to filter everything the person says through a lens of “my friend’s not crazy so if they do sound crazy, it’s probably a metaphor or else I’m misunderstanding what they’re trying to say”.
Yes.
> The aliens part (as stated) isn’t a red flag for me, but the Kant thing seems transparently crazy to me. [...]
I would imagine that other people saw his relationship to Kant as something like Kant being a Shoulder Advisor, maybe with additional steps to make it feel more real.
In an environment where some people do seances and use crystals to cleanse negative energy, they might have thought that rituals get more effective if you believe in their realness. On that view, someone who manages to literally believe they are talking to Kant, rather than just to an abstraction of Kant in their own mind, would be more powerful.
I do think they messed up here by not understanding why truth is valuable, but I can see how things played out that way.
> [...] But absent a lot of hidden factors like that, this seems very overconfident to me, and I’m surprised if this really was a widespread Leverage view.
They seem to have believed that they could give people Musk-level competence. A hundred people with Musk-level competence might be able to execute a plan like the one Cummings proposed and successfully take over the US government.
If they really could transform people that way, the confidence might be reasonable. Stories like Zoe’s, however, suggest that they didn’t actually have that ability; instead, their experiments dissolved into strange infighting and losing touch with reality.
Interestingly, my comment further down that asks for details about the information-sharing practices has very few upvotes (https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=qqAFyqZrfAdHsuBz4).
It seems like most people reading this thread are more interested in upvoting judgements than in upvoting requests for information.
To me, saying that someone is a better philosopher than Kant seems less crazy than saying that saying that someone is a better philosopher than Kant seems crazy.
Isn’t the thing Rob is calling crazy that someone “believed he was learning from Kant himself live across time”, rather than believing that e.g. Geoff Anders is a better philosopher than Kant?
Yeah, I wasn’t talking about the ‘better than Kant’ thing.
Regarding the ‘better than Kant’ thing: I’m not particularly in awe of Kant, so I’m not shocked by the claim that lots of random people have better core philosophical reasoning skills than Kant (even before we factor in the last 240 years of philosophy, psychology, etc. progress, which gives us a big unfair advantage vs. Kant).
The part I’m (really quite) skeptical of is “Geoff is the best philosopher who’s ever lived”. What are the major novel breakthroughs being gestured at here?
It’s more crazy after you load in the context that people at Leverage think Kant is more impressive than e.g. Jeremy Bentham.