Off-the-cuff thoughts from me after listening to the Twitch conversation between Anna and Geoff:
I think Geoff disagrees, or at least in the past disagreed, more than he clearly sees, with the claim that using narratives to boost morale (specifically, deemphasizing information that contradicts a narrative plan) is basically just bad in the long run. It would be better to have a deeper understanding of what morale is.
Geoff describes being harmed by some sort of initial rejection by the rationality/EA community (around 2011? 2010?). This suggests, to me, a (totally conjectural!) story where he got into an escalating narrative cold war with the rationality community: first he perceives (possibly correctly) that the community rejects him, and thereby cuts off his ability to work with people for projects he thinks are good; then, he corrects for this with narrative pushback—basically, firmly reemphasizing his positive vision or whatever. Then people in the community sense this as narrative distortion / deception, and react (more or less consciously) with further counter-distortion. (Where the mechanism is like, they sense something fishy but don’t know how to say “Geoff is slightly distorting things about Leverage’s plans”, so instead they want people to just not work with Geoff; but they can’t just tell people to do that, so they distort facts about Geoff/Leverage to cause others to take their preferred actions; etc.)
[ETA: sorry for all the caveats… specifically, I do use judgy language, but don’t endorse the judgements, but don’t want to change the language.] [The following, if taken as a judgement, is very harsh and basically unfair, and it would suck to punish Geoff for having conversations like this. So please don’t take it as a judgement. I want to get a handle on what’s up with Geoff, so I want to describe his behavior. Maybe this is bad; LMK if you think so.] It was often hard to listen to Geoff. He seemed to talk in long, apparently low-content sentences, with lots of hemming and hawing and attention to appearance, and lots of very general statements that seemed to not address precisely the topic. (Again, this is unfairly harsh if taken as a judgement, and also he was talking in front of 50 people, sort of.)
Anna says there were in the early 2010s rumors that Leverage was trying to fundraise from “other people’s donors”. And that Leverage/Geoff was trying to recruit, whether ideologically or employfully, employees of other EA/rationality orgs.
I didn’t hear anything that strongly confirms or denies adversarial hypotheses like “Geoff was fairly actively doing something pretty distortiony in Leverage that caused harm, and is sort of hiding this by downplaying / redirecting attention / etc.”.
Broadly it would be really good to understand better how to have world-saving narratives and such, especially ones that can recruit and retain political will if they really ought to, without narrative fraud / information cascades / etc.
Thanks! I would love follow-up on LW to the twitch stream, if anyone wants to. There were a lot of really interesting things said in the text chat that we didn’t manage to engage with, for example. Unfortunately the recording was lost, which is a shame because IMO it was a great conversation.
TekhneMakre writes:
This suggests, to me, a (totally conjectural!) story where [Geoff] got into an escalating narrative cold war with the rationality community: first he perceives (possibly correctly) that the community rejects him…
This seems right to me
Anna says there were in the early 2010s rumors that Leverage was trying to fundraise from “other people’s donors”. And that Leverage/Geoff was trying to recruit, whether ideologically or employfully, employees of other EA/rationality orgs.
Yes. My present view is that Geoff’s reaching out to donors here was legit, and my and others’ complaints were not; donors should be able to hear all the pitches, and it’s messed up to think of “person reached out to donor X to describe a thingy X might want to donate to” as a territorial infringement.
This seems to me like an example of me and others escalating the “narrative cold war” that you mention.
[Geoff] seemed to talk in long, apparently low content sentences with lots of hemming and hawing and attention to appearance…
I noticed some of this, though less than I might’ve predicted from the background context in which Geoff was, as you note, talking to 50 people, believing himself to be recorded, and in an overall social context in which a community he has long been in a “narrative cold war” with (under your hypothesis, and mine) was in the midst of trying to decide whether to something-like scapegoat him.
I appreciate both that you mentioned your perception (brought it into text rather than subtext, where we can reason about it, and can try to be conscious of all the things together), and that you’re trying to figure out how to incentivize and not disincentivize Geoff’s choice to do the video (which IMO shared a bunch of good info).
I’d like to zoom in on an example that IMO demonstrates that the causes of the “hemming and hawing” are sometimes (probably experience-backed) mistrust of the rationalist community as a [context that is willing to hear and fairly evaluate his actual evidence], rather than, say, desire for the truth to be hidden:
At one point toward the end of the twitch, Geoff was responding to a question about how we got here from a pretty cooperative state in ~2013, and said something kinda like “… I’m trying to figure out how to say this without sounding like I’m being unfair to your side of things,” or something, and I was like “maybe just don’t, and I or others can disagree if we think you’re wrong,” and then he sort of went “okay, if you’re asking for it” and stopped hemming and hawing and told a simple and direct story about how in the early days of 2011-2014, Leverage did a bunch of things to try to cause specific collaborations that would benefit particular other groups (THINK, the original EA leaders gathering in the Leverage house in 2013, the 2014 retreat + summit, a book launch party for ‘Our Final Invention’ co-run with SingInst, some general queries about what kind of collaborations folks might want, early attempts to merge with SingInst and with 80k), and how he would’ve been interested in and receptive to other bids for common projects if I or others had brought him some. And I was like “yes, that matches my memory and perception; I remember you and Leverage seeming unusually interested, more than other groups, in getting specific collaborations or common projects going that might support your goals + other groups’ goals at once, and in trying to support cooperation in this way,” and he seemed surprised that I would acknowledge this.
So, I think part of the trouble is that Geoff didn’t have positive expectations of us as a context in which to truth-seek together.
One partial contributor to this expectation of Geoff’s, I would guess, is the pattern via which (in my perception) the rationalist community sometimes decides people’s epistemics/etc. are “different and bad” and then distances from them, punishes those who don’t act as though we need to distance from them, etc., often in a manner that can seem kinda drastic and all-or-nothing, rather than docking points proportional to what it indicates about a person’s likely future ability to share useful thoughts in a milder-mannered fashion. For example, during a panel discussion at the (Leverage-run) 2014 EA Summit, in front of 200 people, I asked Geoff aloud whether he in fact thought that sticking a pole through someone’s head (a la Phineas Gage) would have no effect on their cognition except via their sense-perception. Geoff answered “yes”, as I expected since he’d previously mentioned this view. And… there was a whole bunch of reaction. E.g., Habryka, in the twitch chat, mentioned having been interning with Leverage at the time of that panel conversation, and said “[that bit of panel conversation] caused me nightmares… because I was interning at Leverage at the time, and it made me feel very alienated from my environment. And felt like some kind of common ground was pulled out from under me.”
I for many years often refrained from sharing some of the positive views/data/etc. I had about Leverage, for fear of being [judged or something] for it. (TBC, I had both positive and negative views, and some error bars. But Leverage looked to me like well-meaning people who were trying a hard-core something that might turn out cool, and that was developing interesting techniques and models via psychological research, and I mostly refrained from saying this because I was cowardly about it in response to social pressure. … in addition to my usual practice of sometimes refraining from sharing some of my hesitations about the place, as about most places, in a flinchy attempt to avoid conflict.)
I didn’t hear anything that strongly confirms or denies adversarial hypotheses like “Geoff was fairly actively doing something pretty distortiony in Leverage that caused harm, and is sort of hiding this by downplaying / redirecting attention / etc.”.
My guess is that he was and is at least partially doing some of this, in addition to making an earnest (and better than I’d expected on generic-across-people priors) effort to share true things. Re: the past dynamics, I and IMO others were also doing actively distortionary stuff, and I think Geoff’s choices, and mine and others’, need to be understood together, as similar responses to a common landscape.
As I mentioned in the twitch that alas didn’t get recorded, in ~2008-2014, ish, somehow a lot of different EA and rationality and AI risk groups felt like allies and members of a common substantive community, at least in my perception (including my perception of the social context that I imagined lots of other people were in). And later on, most seemed to me to kinda give up on most of the others, opting still for a social surface of cooperation/harmony, but without any deep hope in anyone else of the sort that might support building common infrastructure, really working out any substantive disagreements (with tools of truth-seeking rather than only truce-seeking/surface-harmony-preservation, etc.). (With some of the together-ness getting larger over time in the early years, and then with things drifting apart again.) I’m really interested in whether that transition matches others’ perceptions, and, if so, what y’all think the causes were. IMO it was partly about what I’ve been calling “narrative addiction” and “narrative pyramid schemes,” which needs elaboration rather than a set of phrases (I tried this a bit in the lost twitch video), but I need to go now so may try it later.
but without any deep hope in anyone else of the sort that might support building common infrastructure, really working out any substantive disagreements
If this is true, it does strike me as important and interesting.
what y’all think the causes were
Speaking from a very abstract viewpoint not strongly grounded in observations, I’ll speculate:
any deep hope
One contributor, naturally, would be fear of false hope. One is (correctly) afraid of hope because hope somewhat entails investment and commitment. Fear of false hope could actually make hope be genuinely false, even when there could have been true hope. This happens because hope is to some extent a decision, so *expecting* you and others in the future to not collaborate in some way also *constitutes a decision* to not collaborate in that way. If you will in the future behave in accordance with a plan, then it’s probably correct to behave now in accordance with the plan; and if you will not, then it’s probably correct not to do so now. (I tried to meditate on this in the footnotes to my post Hope and False Hope.) (Obviously most things aren’t very subject to this belief-plan mixing, and things where we can separate beliefs from plans are very useful for building foundations, but some non-separable things are important, e.g. open-ended collaboration.)
This feels maybe related to a comment you, Anna, made in the conversation about Geoff seeming somewhat high on a dimension of manic-ness or something; he said others have said he seems hypomanic. The story being: Geoff is more hopeful and hope-based in general, which explains why he sought collaboration, and caused collective hope in EA, and ended up feeling he had to defend his org’s hope against hope-destroyers (which hope he referred to as “morale”).
working out any substantive disagreements
I kind of get the impression, based on public conversations, that some people (e.g. Eliezer) get stuck with disagreements because the real reasons for their beliefs are ideas that they don’t want to spread, e.g. ideas about how intelligence works. I’m thinking, for example, of Yudkowsky-Christiano-Hanson-Drexler disagreements, and also of disagreements about likely timelines. Is that a significant part of it?
truce-seeking/surface-harmony-preservation
I guess this is an obvious hypothesis, but worth stating: to the extent that people viewed things as zero-sum around recruiting mind-share, and other things beholden to third parties like funding or relationships to non-EA/x-risk orgs, there’s an incentive to avoid public fights (which would be negative-sum for the combatants), but also to avoid updating on core beliefs (which would “hurt” the updater, in terms of mind-share). Related to the thing about fundraising from “our donors” and poaching employees. It’d be nice to be clearer on who’s lying to whom in this scenario. Org leaders are lying to donors, to employees, to other orgs, to themselves… basically everyone, I guess…
I imagine (even more speculatively) there being a sort of deep ambiguity about supposedly private conversations aimed at truth-seeking, where there’s a lot of actual intended truth seeking, but also there’s the specter of “If I update too much about these background founding assumptions of my strategy, I’ll have to start from scratch and admit to everyone I was deeply mistaken”, as well as “If I can get the other person to deeply update, that makes the environment more hospitable to my strategy”, which might lead one to direct attention away from one’s own cruxes.
(I also feel like there’s something about specialization or commitment that’s maybe playing into all this. On the one hand, people with something to protect want to deeply update and do something else if their foundational strategic beliefs are wrong; on the other hand, throwing out your capital is maybe bad policy. E.g., Elon Musk didn’t drop his major projects upon realizing the importance of AI risk, and that’s not obviously a mistake?)
Geoff describes being harmed by some sort of initial rejection by the rationality/EA community (around 2011? 2010?).
One of the interesting things about that timeframe is that a lot of the stuff is online; here’s the 2012 discussion (Jan 9th, Jan 10th, Sep 19th), for example. (I tried to find his earliest comment that I remembered, but I don’t think it was with the Geoff_Anders account or it wasn’t on LessWrong; I think it was before Leverage got started, and people responded pretty skeptically then also?)
One takeaway: Eliezer’s interaction with Geoff does seem like Eliezer was making some sort of mistake. Not sure what the core is, but, one part is like conflating [evidence, the kind that can be interpersonally verified] with [evidence, the kind that accumulates subconsciously as many abstract percepts and heuristics, which can be observably useful while still pre-theoretic, pre-legible]. Like, maybe Eliezer wants to only talk with people where either (1) they already have enough conceptual overlap that abstract cutting-edge theories also always cash out as perceivable predictions, or (2) aren’t trying to share pre-legible theories. But that’s different from Geoff making some terrible incurable mistake of reasoning. (Terrible incurable mistakes are certainly correlated with illegibility, but that’s not something to Goodhart.)
I’m sort of surprised that you’d interpret that as a mistake. It seems to me like Eliezer is running a probabilistic strategy, which has both type I and type II errors, and so a ‘mistake’ is something like “setting the level wrong to get a bad balance of errors” instead of “the strategy encountered an error in this instance.” But also I don’t have the sense that Eliezer was making an error.
It seems to me like Eliezer is running a probabilistic strategy
It sounds like this describes every strategy? I guess you mean, he’s explicitly taking into account that he’ll make errors, and playing the probabilities to get good expected value. So this makes sense, like I’m not saying he was making a strategic mistake by not, say, working with Geoff. I’m saying:
(internally) Well this is obviously wrong. Minds just don’t work by those sorts of bright-line psychoanalytic rules written out in English, and proposing them doesn’t get you anywhere near the level of an interesting cognitive algorithm.[...]
(out loud) What does CT say I should experience seeing, that existing cognitive science wouldn’t tell me to expect?
Geoff: (Something along the lines of “CT isn’t there yet”[...])
(out loud) Okay, then I don’t believe in CT because without evidence there’s no way you could know it even if it was true.
sounds like he’s conflating shareable and non-shareable evidence. Geoff could have seen a bunch of stuff and learned heuristics that he couldn’t articulately express other than with silly-seeming “bright-line psychoanalytic rules written out in English”. Again, it can make sense to treat this as “for my purposes, equivalent to being obviously wrong”. But like, it’s not really equivalent, you just *don’t know* whether the person has hidden evidence.
Even if all you have is a bunch of stuff and learned heuristics, you should be able to make testable predictions with them. Otherwise, how can you tell whether they’re any good or not?
Whether the evidence that persuaded you is sharable or not doesn’t affect this. For example, you might have a prior that a new psychotherapy technique won’t outperform a control because you’ve read like 30 different cases where a leading psychiatrist invented a new therapy technique, reported great results, and then couldn’t train anyone else to get the same results he did. That’s my prior, and I suspect it’s Eliezer’s, but if I wanted to convince you of it I’d have a tough time because there’s not really a single crux, just those 30 different cases that slowly accumulated. And yet, even though I can’t share the source of my belief, I can use it to make concrete testable predictions: when they do an RCT for the 31st therapy technique, it won’t outperform the control.
Geoff-in-Eliezer’s-anecdote has not reached this point. This is especially bad for a developing theory: if Geoff makes a change to CT, how will he tell if the new CT is better or worse than the old one? Geoff-replying-to-Eliezer takes this criticism seriously, and says he can make concrete, if narrow, predictions about specific people he’s charted.
you should be able to make testable predictions with them
Certainly. But you might not be able to make testable predictions for which others will readily agree with your criteria for judgement. In the exchange, Geoff gives some “evidence”, and in other places he gives additional “evidence”. It’s not really convincing to me, but it at least has the type signature of evidence. Eliezer responds:
Which sounds a lot like standard cognitive dissonance theory
This is eliding that Geoff probably has significant skill in identifying more detail of how beliefs and goals interact, beyond just what someone would know if they heard about cognitive dissonance theory. Like basically I’m saying that if Eliezer sat with Geoff for a few hours through a few sessions of Geoff doing his thing with some third person, Eliezer would see Geoff behave in a way that suggests falsifiable understanding that Eliezer doesn’t have. (Again, not saying he should have done that or anything.)
Well, the video is lost. But my friend Ben Pace (do you know him? he is great) was kind enough to take notes on what he said specifically in response to my question.
My question was something like: “Why do you think some people are afraid of retaliation from you? Have you made any threats? Have you ever retaliated against a Leverage associate?” This is not the exact wording but close enough. I used the words “spiteful, retaliatory, or punishing” so he repeats that in his answer.
I also explicitly told him he didn’t have to answer any of these questions, like I wasn’t demanding that he answer them.
I am pasting Geoff’s response below.
Great questions.
Um.
Off the top of my head I don’t recall spiteful retaliatory or punishing actions. Um. I do think that I… There’s gotta be some other category of actions taken in anger where… I can think of angry remarks that I’ve made, absolutely. I can think of some actions that don’t pertain to Leverage associates that after thinking about for a while I realized there was something I was pretty angry about. In general I try to be really constructive, there’s definitely, let’s see, so… There’s definitely a mode that, it’s like, I like to think of all sorts of different possibilities of things to do, for example this was for EAG a while back, we were going to go and table at EAG and see if there’s anyone who is good to hire, we received word from CEA that we weren’t allowed to table there, super mad about that because we created the EA Summit series and handed it to CEA, being disinvited from the thing we started, I was really mad “let’s go picket and set up in front of EAG and tell people about this”, y’know a number of people responded to that suggestion really negatively, and… Maybe the thing I want to say is I think there’s something like appropriate levels of response, and the thing I really want to do is to have appropriate levels of response. It’s super hard to never get mad or never be insulted but the thing I try super hard to do is to get to the point where there’s calibrated response. So maybe y’know there’s something in there… I have been in fact rly surprised when people talked about extreme retaliation, I’m like “What.” (!).
There’s definitely a line of thought I’ve seen around projects where they deal with important things, where people are like “The project is so important we must do anything to protect it” which I don’t agree with, y’know, I shut down Leverage because I talked to someone who was suffering too much and I was like “No” and then that was that.
Anna asked a relevant follow-up question. She said something like: I expect picketing to be [a more balanced response] because it’s a public action. What about [non-public] (hidden) acts of retaliation?
I saw some of his reaction to this before my internet cut out again. (I think he could have used a hug in that moment… or maybe just me, maybe I could use a hug right now.) 😣
From the little glimpses I got (pretty much only during the first hour Q&A section), I got this sense (this is my own feelings and intuitions speaking):
I did not sense him being ‘in cooperate mode’ on the object level, but he seemed to be ‘picking cooperate’ on a meta level. He was trying to act according to good principles. E.g. by doing the video at all, and the way he tried to answer Qs by saying only true things. He tried not to come from a defensive place.
He seemed to keep to his own ‘side of the street’. Did not try to make claims about others, did not really offer models of others, did not speculate. I think he may have also been doing the same thing with the people in the chat? (I dunno tho, I didn’t see 90%.) Seems ‘cleaner’ to do it this way and avoids a lot of potential issues (like saying something that’s someone else’s to say). But meh, it’s also too bad we didn’t get to see his models about the people.
[ETA: sorry for all the caveats… specifically, I do use judgy language, but don’t endorse the judgements, but don’t want to change the language.] [The following, if taken as a judgement, is very harsh and basically unfair, and it would suck to punish Geoff for having conversations like this. So please don’t take it as a judgement. I want to get a handle on what’s up with Geoff, so I want to describe his behavior. Maybe this is bad; LMK if you think so.] It was often hard to listen to Geoff. He seemed to talk in long, apparently low-content sentences, with lots of hemming and hawing and attention to appearance, and lots of very general statements that seemed to not address precisely the topic. (Again, this is unfairly harsh if taken as a judgement, and also he was talking in front of 50 people, sort of.)
I don’t think it’s bad of you. It seemed to me that he was deflecting or redirecting many of the points Anna was trying to get at.
I have video of the first 22 minutes, but at the end I switched into my password manager (it doesn’t show passwords on screen, but it does show a series of sites where I’m registered), so I wouldn’t want to post the video publicly, but I’m open to sharing it with individual people if someone wants to write something referencing it.
I wish I had been clearer about how to do screen recording in a way that only captures one browser window...
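A side note for anyone wanting to avoid the same problem in a future stream: most capture tools can record a single window rather than the whole desktop. Below is a minimal sketch of one way to do it by driving ffmpeg from Python; this is my own illustration, not anything the commenter actually used. It assumes Windows (ffmpeg’s gdigrab input) with ffmpeg installed and on PATH, and the window title and output filename are placeholders.

```python
# Minimal sketch: record one window (not the whole desktop) by wrapping ffmpeg's
# Windows "gdigrab" input, which can capture a single window by its exact title.
# Assumes ffmpeg is on PATH; the title and filename below are placeholders.
import subprocess

def record_window(window_title: str, output_path: str, framerate: int = 30) -> None:
    """Record the named window until 'q' is pressed in the terminal running ffmpeg."""
    cmd = [
        "ffmpeg",
        "-f", "gdigrab",                 # Windows screen/window capture input
        "-framerate", str(framerate),
        "-i", f"title={window_title}",   # capture only the window with this exact title
        "-c:v", "libx264",               # encode to H.264
        "-preset", "veryfast",
        output_path,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Hypothetical example: record a browser window showing the Twitch stream.
    record_window("Twitch - Mozilla Firefox", "stream_capture.mp4")
```

(On Linux or macOS the input side differs, e.g. x11grab or avfoundation, which capture screens or regions rather than windows by title, so a tool like OBS with a window-capture source may be simpler there.)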
How about posting the audio?
Geoff asked me to leave public publication to him. I sent him my video with the last minute (where I had personal information) cut off. Given that I do think Geoff made a good effort to be cooperative, and there’s no attempt to assert that something happened during the stream that didn’t actually happen, I see no reason to unilaterally publish something publicly.
Noting that it has been 9 days and Geoff has not yet followed through on publishing the 22-minute video. Thankfully, however, a complete audio recording has been made available by another user.
On https://www.geoffanders.com/ there’s the link to https://www.dropbox.com/s/pt3q5xejglsgrcr/1st%20Twitch%20Stream%20-%20Leverage%20History%20-%20Beginning.mp4?dl=0# so he did follow through.
I notice that my comment score above is now zero. I would like others to know that I visited Geoff’s website prior to posting my comment to ensure my comment was accurate, and that these links appeared after my above comment.
I did indeed misunderstand that! I didn’t downvote, but my misunderstanding did cause me to not upvote.
Geoff wrote me six days ago that he put it on his website.
It is possible that I missed the link, in which case I apologize, although I am surprised because I did check the website. It doesn’t seem that the web archive can verify timestamps.
I am glad I wrote my comments anyway, since the links have now been shared here on LW (which I don’t think they were before), and since Lulie’s recording that I linked above seems to have been taken down.
Geoff was interested in publishing a transcript and a video, so I think Geoff would be happy with you publishing the audio from the recording you have.
Hope to see this posted soon! I missed the first hour of the Twitch video. (Though I’m guessing the part I saw, Geoff and Anna talking, was the most valuable part.)
Is that to say that you have audio of the whole conversation, and video of the first 20 minutes?
I have a recording of 22 minutes. The last minute includes me switching into my password manager and thus I cut it off from the video that I passed on.
I think the question is: Why not send the audio from after the 22 minute mark? Then we won’t be able to see the password manager.
I don’t have anything after the 22 minute mark. I have a recording of 22 minutes and passed on 21 minutes of it.
At the time, I didn’t want to focus my cognitive resources on figuring out recording but on the actual content (and you can actually see me writing my comment ;) in the video).
Makes sense, thanks for clarifying and for sharing what you have.
A few more half-remembered notes from the conversation: https://www.lesswrong.com/posts/XPwEptSSFRCnfHqFk/zoe-curzi-s-experience-with-leverage-research?commentId=e8vL8nyTGwDLGnR3r#Yrk2375Jt5YTs2CQg