> but without any deep hope in anyone else of the sort that might support building common infrastructure, really working out any substantive disagreements
If this is true, it does strike me as important and interesting.
> what y’all think the causes were
Speaking from a very abstract viewpoint not strongly grounded in observations, I’ll speculate:
> any deep hope
One contributor, naturally, would be fear of false hope. One is (correctly) afraid of hope because hope somewhat entails investment and commitment. Fear of false hope could actually make hope be genuinely false, even when there could have been true hope. This happens because hope is to some extent a decision, so *expecting* that you and others will not collaborate in some way in the future also *constitutes a decision* to not collaborate in that way. If you will in the future behave in accordance with a plan, then it’s probably correct to behave now in accordance with the plan; and if you won’t, then it’s probably correct not to now. (I tried to meditate on this in the footnotes to my post Hope and False Hope.) (Obviously most things aren’t very subject to this belief-plan mixing, and things where we can separate beliefs from plans are very useful for building foundations, but some non-separable things are important, e.g. open-ended collaboration.)
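A minimal stag-hunt-style sketch of that belief-plan mixing (the payoffs, and the helper `best_response`, are invented purely for illustration): if I expect the other party not to show up, not showing up is also my best move, so the pessimistic expectation makes itself true.

```python
# Toy coordination game (payoff numbers are made up for illustration).
# Each party either "collaborates" (invests in shared infrastructure)
# or goes it "alone". Collaboration only pays off if the other side also shows up.
PAYOFF = {
    ("collaborate", "collaborate"): 3,   # shared infrastructure gets built
    ("collaborate", "alone"): -1,        # wasted investment: the false-hope outcome
    ("alone", "collaborate"): 1,
    ("alone", "alone"): 1,
}

def best_response(p_other_collaborates: float) -> str:
    """My best action given my probability that the other party collaborates."""
    ev_collab = (p_other_collaborates * PAYOFF[("collaborate", "collaborate")]
                 + (1 - p_other_collaborates) * PAYOFF[("collaborate", "alone")])
    ev_alone = (p_other_collaborates * PAYOFF[("alone", "collaborate")]
                + (1 - p_other_collaborates) * PAYOFF[("alone", "alone")])
    return "collaborate" if ev_collab > ev_alone else "alone"

# Expecting the other side not to show up makes not showing up correct,
# which is the sense in which the expectation is also a decision.
print(best_response(0.2))  # -> "alone"
print(best_response(0.8))  # -> "collaborate"
```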
The hope angle feels maybe related to a comment you, Anna, made in the conversation about Geoff seeming somewhat high on a dimension of manic-ness or something; he said others have told him he seems hypomanic. The story being: Geoff is more hopeful and hope-based in general, which explains why he sought collaboration and caused collective hope in EA, and why he ended up feeling he had to defend his org’s hope (which he referred to as “morale”) against hope-destroyers.
> working out any substantive disagreements
I kind of get the impression, based on public conversations, that some people (e.g. Eliezer) get stuck with disagreements because the real reasons for their beliefs are ideas that they don’t want to spread, e.g. ideas about how intelligence works. I’m thinking, for example, of Yudkowsky-Christiano-Hanson-Drexler disagreements, and also of disagreements about likely timelines. Is that a significant part of it?
> truce-seeking/surface-harmony-preservation
I guess this is an obvious hypothesis, but it’s worth stating: to the extent that people viewed things as zero-sum around recruiting mind-share, and around other things beholden to third parties like funding or relationships to non-EA/x-risk orgs, there’s an incentive to avoid public fights (which would be negative-sum for the combatants), but also to avoid updating on core beliefs (which would “hurt” the updater in terms of mind-share). Related to the thing about fundraising to “our donors” and poaching employees. It’d be nice to be clearer on who’s lying to whom in this scenario. Org leaders are lying to donors, to employees, to other orgs, to themselves… basically everyone, I guess…
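To make that incentive concrete, here’s a toy scoring of the options (the `value` function, the numbers, and the heavy weighting on mind-share are all my own invented assumptions, not anything from the conversation):

```python
# Toy model of the mind-share incentive (all numbers invented).
# Suppose an org leader privately thinks the other side is right with
# probability p, and weighs mind-share heavily against strategic correctness.
def value(action: str, p_other_is_right: float, mindshare_weight: float = 2.0) -> float:
    """Rough expected value of each way of handling a real disagreement."""
    epistemic = {            # value of ending up with correct strategic beliefs
        "fight publicly": p_other_is_right * 0.5,   # public fights rarely resolve much
        "update publicly": p_other_is_right * 1.0,  # actually fixing the strategy
        "keep quiet": 0.0,
    }
    mindshare = {            # perceived zero-sum cost w.r.t. donors/recruits/relationships
        "fight publicly": -2.0,   # negative-sum for both combatants
        "update publicly": -1.0,  # admitting the founding assumptions were off
        "keep quiet": 0.0,
    }
    return epistemic[action] + mindshare_weight * mindshare[action]

for action in ("fight publicly", "update publicly", "keep quiet"):
    print(action, value(action, p_other_is_right=0.6))
# With mind-share weighted heavily, "keep quiet" wins even at p = 0.6:
# truce-seeking / surface-harmony-preservation falls out of the incentives.
```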
I imagine (even more speculatively) there being a sort of deep ambiguity about supposedly private conversations aimed at truth-seeking, where there’s a lot of actual intended truth-seeking, but also the specter of “If I update too much on these background founding assumptions of my strategy, I’ll have to start from scratch and admit to everyone I was deeply mistaken”, as well as “If I can get the other person to deeply update, that makes the environment more hospitable to my strategy”, which might lead one to direct attention away from one’s own cruxes.
(I also feel like there’s something about specialization or commitment that’s maybe playing into all this. On the one hand, people with something to protect want to deeply update and do something else if their foundational strategic beliefs are wrong; on the other hand, throwing out your capital is maybe bad policy. E.g., Elon Musk didn’t drop his major projects upon coming to take AI risk seriously, and that’s not obviously a mistake?)
A few more half-remembered notes from the conversation: https://www.lesswrong.com/posts/XPwEptSSFRCnfHqFk/zoe-curzi-s-experience-with-leverage-research?commentId=e8vL8nyTGwDLGnR3r#Yrk2375Jt5YTs2CQg