Thanks! I would love follow-up on LW to the twitch stream, if anyone wants to. There were a lot of really interesting things being said in the text chat that we didn’t manage to engage with, for example. Unfortunately the recording was lost, which is a shame because IMO it was a great conversation.
TekhneMakre writes:
This suggests, to me, a (totally conjectural!) story where [Geoff] got into an escalating narrative cold war with the rationality community: first he perceives (possibly correctly) that the community rejects him…
This seems right to me
Anna says there were in the early 2010s rumors that Leverage was trying to fundraise from “other people’s donors”. And that Leverage/Geoff was trying to recruit, whether ideologically or employfully, employees of other EA/rationality orgs.
Yes. My present view is that Geoff’s reaching out to donors here was legit, and my and others’ complaints were not; donors should be able to hear all the pitches, and it’s messed up to think of “person reached out to donor X to describe a thingy X might want to donate to” as a territorial infringement.
This seems to me like an example of me and others escalating the “narrative cold war” that you mention.
[Geoff] seemed to talk in long, apparently low content sentences with lots of hemming and hawing and attention to appearance…
I noticed some of this, though less than I might’ve predicted from the background context in which Geoff was, as you note, talking to 50 people, believing himself to be recorded, and in an overall social context in which a community he has long been in a “narrative cold war” with (under your hypothesis, and mine) was in the midst of trying to decide whether to something-like scapegoat him.
I appreciate both that you mentioned your perception (brought it into text rather than subtext, where we can reason about it, and can try to be conscious of all the things together), and that you’re trying to figure out how to incentivize and not disincentivize Geoff’s choice to do the video (which IMO shared a bunch of good info).
I’d like to zoom in on an example that IMO demonstrates that the causes of the “hemming and hawing” are sometimes (probably experience-backed) mistrust of the rationalist community as a [context that is willing to hear and fairly evaluate his actual evidence], rather than, say, desire for the truth to be hidden:
At one point toward the end of the twitch, Geoff was responding to a question about how we got from a pretty cooperative state in ~2013 to where we are now, and said something kinda like “… I’m trying to figure out how to say this without sounding like I’m being unfair to your side of things,” or something, and I was like “maybe just don’t, and I or others can disagree if we think you’re wrong,” and then he sort of went “okay, if you’re asking for it” and stopped hemming and hawing and told a simple and direct story about how in the early days of 2011-2014, Leverage did a bunch of things to try to cause specific collaborations that would benefit particular other groups (THINK, the original EA leaders gathering in the Leverage house in 2013, the 2014 retreat + summit, a book launch party for ‘Our Final Invention’ co-run with SingInst, some general queries about what kind of collaborations folks might want, early attempts to merge with SingInst and with 80k), and how he would’ve been interested in and receptive to other bids for common projects if I or others had brought him some. And I was like “yes, that matches my memory and perception; I remember you and Leverage seeming unusually interested in getting specific collaborations or common projects going that might support your goals + other groups’ goals at once, more so than other groups, and trying to support cooperation in this way,” and he seemed surprised that I would acknowledge this.
So, I think part of the trouble is that Geoff didn’t have positive expectations of us as a context in which to truth-seek together.
One partial contributor to this expectation of Geoff’s, I would guess, is the pattern via which (in my perception) the rationalist community sometimes decides people’s epistemics/etc. are “different and bad” and then distances from them, punishes those who don’t act as though we need to distance from them, etc., often in a manner that can seem kinda drastic and all-or-nothing, rather than docking points proportional to what it indicates about a person’s likely future ability to share useful thoughts in a milder-mannered fashion. For example, during a panel discussion at the (Leverage-run) 2014 EA Summit, in front of 200 people, I asked Geoff aloud whether he in fact thought that sticking a pole through someone’s head (a la Phineas Gage) would have no effect on their cognition except via their sense-perception. Geoff answered “yes”, as I expected, since he’d previously mentioned this view. And… there was a whole bunch of reaction. E.g., Habryka, in the twitch chat, mentioned having been interning with Leverage at the time of that panel conversation, and said “[that bit of panel conversation] caused me nightmares… because I was interning at Leverage at the time, and it made me feel very alienated from my environment. And felt like some kind of common ground was pulled out from under me.”
For many years I often refrained from sharing some of the positive views/data/etc. I had about Leverage, for fear of being [judged or something] for it. (TBC, I had both positive and negative views, and some error bars. But Leverage looked to me like well-meaning people who were trying a hard-core something that might turn out cool, and that was developing interesting techniques and models via psychological research, and I mostly refrained from saying this because I was cowardly about it in response to social pressure… in addition to my usual practice of sometimes refraining from sharing some of my hesitations about the place, as about most places, in a flinchy attempt to avoid conflict.)
I didn’t hear anything that strongly confirms or denies adversarial hypotheses like “Geoff was fairly actively doing something pretty distortiony in Leverage that caused harm, and is sort of hiding this by downplaying / redirecting attention / etc.”.
My guess is that he was and is at least partially doing some of this, in addition to making an earnest (and better than I’d expected on generic-across-people priors) effort to share true things. Re: the past dynamics, I and IMO others were also doing actively distortionary stuff, and I think Geoff’s choices, and mine and others’, need to be understood together, as similar responses to a common landscape.
As I mentioned in the twitch that alas didn’t get recorded, in ~2008-2014, ish, somehow a lot of different EA and rationality and AI risk groups felt like allies and members of a common substantive community, at least in my perception (including my perception of the social context that I imagined lots of other people were in). And later on, most seemed to me to kinda give up on most of the others, opting still for a social surface of cooperation/harmony, but without any deep hope in anyone else of the sort that might support building common infrastructure, really working out any substantive disagreements (with tools of truth-seeking rather than only truce-seeking/surface-harmony-preservation, etc.). (With some of the together-ness getting larger over time in the early years, and then with things drifting apart again.) I’m really interested in whether that transition matches others’ perceptions, and, if so, what y’all think the causes were. IMO it was partly about what I’ve been calling “narrative addiction” and “narrative pyramid schemes,” which needs elaboration rather than just a set of phrases (I tried this a bit in the lost twitch video), but I need to go now so may try it later.
but without any deep hope in anyone else of the sort that might support building common infrastructure, really working out any substantive disagreements
If this is true, it does strike me as important and interesting.
what y’all think the causes were
Speaking from a very abstract viewpoint not strongly grounded in observations, I’ll speculate:
any deep hope
One contributor, naturally, would be fear of false hope. One is (correctly) afraid of hope because hope somewhat entails investment and commitment. Fear of false hope could actually make hope be genuinely false, even when there could have been true hope. This happens because hope is to some extent a decision, so *expecting* you and others in the future to not collaborate in some way also *constitutes a decision* to not collaborate in that way. If you will in the future behave in accordance with a plan, then it’s probably correct to behave now in accordance with the plan; and if you will not, then it’s probably correct not to now. (I tried to meditate on this in the footnotes to my post Hope and False Hope.) (Obviously most things aren’t very subject to this belief-plan mixing, and things where we can separate beliefs from plans are very useful for building foundations, but some non-separable things are important, e.g. open-ended collaboration.)
This feels maybe related to a comment you, Anna, made in the conversation about Geoff seeming somewhat high on a dimension of manic-ness or something, to which he said others have said he seems hypomanic. The story being: Geoff is more hopeful and hope-based in general, which explains why he sought collaboration, caused collective hope in EA, and ended up feeling he had to defend his org’s hope against hope-destroyers (which hope he referred to as “morale”).
working out any substantive disagreements
I kind of get the impression, based on public conversations, that some people (e.g. Eliezer) get stuck with disagreements because the real reasons for their beliefs are ideas that they don’t want to spread, e.g. ideas about how intelligence works. I’m thinking, for example, of Yudkowsky-Christiano-Hanson-Drexler disagreements, and also of disagreements about likely timelines. Is that a significant part of it?
truce-seeking/surface-harmony-preservation
I guess this is an obvious hypothesis, but worth stating: to the extent that people viewed things as zero-sum around recruiting mind-share, and other things beholden to third parties like funding or relationships to non-EA/x-risk orgs, there’s an incentive to avoid public fights (which would be negative sum for the combatants), but also to avoid updating on core beliefs (which would “hurt” the updater, in terms of mind-share). Related to the thing about fundraising from “our donors” and poaching employees. It’d be nice to be clearer on who’s lying to whom in this scenario. Org leaders are lying to donors, to employees, to other orgs, to themselves… basically everyone, I guess…
I imagine (even more speculatively) there being a sort of deep ambiguity about supposedly private conversations aimed at truth-seeking, where there’s a lot of actual intended truth seeking, but also there’s the specter of “If I update too much about these background founding assumptions of my strategy, I’ll have to start from scratch and admit to everyone I was deeply mistaken”, as well as “If I can get the other person to deeply update, that makes the environment more hospitable to my strategy”, which might lead one to direct attention away from one’s own cruxes.
(I also feel like there’s something about specialization or commitment that’s maybe playing into all this. On the one hand, people with something to protect want to deeply update and do something else if their foundational strategic beliefs are wrong; on the other hand, throwing out your capital is maybe bad policy. E.g., Elon Musk didn’t drop his major projects upon realizing the importance of AI risk, and that’s not obviously a mistake?)
I have video of the first 22 minutes from the beginning, but at the end I switched into my password manager (not showing passwords on screen, but a series of sites where I’m registered), so I wouldn’t want to publicly post the video, but I’m open to sharing it with individual people if someone wants to write something referencing it.
I wish I had been clearer beforehand about how to do screen recording in a way that only captures one browser window...
How about posting the audio?
Geoff asked me to leave public publication to him. I sent him my video with the last minute (where I had personal information) cut off. Given that I do think Geoff made a good effort to be cooperative, and that there’s no attempt to assert that something happened during the stream that didn’t happen as asserted, I see no reason to unilaterally publish something publicly.
Noting that it has been 9 days and Geoff has not yet followed through on publishing the 22-minute video. Thankfully, however, a complete audio recording has been made available by another user.
On https://www.geoffanders.com/ there’s the link to https://www.dropbox.com/s/pt3q5xejglsgrcr/1st%20Twitch%20Stream%20-%20Leverage%20History%20-%20Beginning.mp4?dl=0# so he did follow through.
I notice that my comment score above is now zero. I would like others to know that I visited Geoff’s website prior to posting my comment to ensure my comment was accurate, and that these links appeared after my above comment.
I did indeed misunderstand that! I didn’t downvote, but my misunderstanding did cause me to not upvote.
Geoff wrote me six days ago that he put it on his website.
It is possible that I missed the link, in which case I apologize, although I am surprised because I did check the website. It doesn’t seem that the web archive can verify timestamps.
I am glad I wrote my comments anyway, so that now the links have been shared here on LW, which I don’t think they were before, especially since Lulie’s recording that I linked above seems to have been taken down.
Geoff was interested in publishing a transcript and a video, so I think Geoff would be happy with you publishing the audio from the recording you have.
Hope to see this posted soon! I missed the first hour of the Twitch video. (Though I’m guessing the part I saw, Geoff and Anna talking, was the most valuable part.)
Is that to say that you have audio of the whole conversation, and video of the first 20 minutes?
I have a recording of 22 minutes. The last minute includes me switching into my password manager and thus I cut it off from the video that I passed on.
I think the question is: Why not send the audio from after the 22 minute mark? Then we won’t be able to see the password manager.
I don’t have anything after the 22 minute mark. I have a recording of 22 minutes and passed on 21 minutes of it.
At the time, I didn’t want to focus my cognitive resources on figuring out recording, but on the actual content (and you can actually see me writing my comment ;) in the video).
Makes sense, thanks for clarifying and for sharing what you have.
A few more half-remembered notes from the conversation: https://www.lesswrong.com/posts/XPwEptSSFRCnfHqFk/zoe-curzi-s-experience-with-leverage-research?commentId=e8vL8nyTGwDLGnR3r#Yrk2375Jt5YTs2CQg