Since this is mostly just pointers to things I’ve already said or implied, I’ll throw out a quick comment.
I would like it if somebody started something like a carefully moderated private Facebook group, mostly of core people who were there, to come to grips with their experiences? I think this could be good.
I am slightly concerned that people who are still in the grips of “Leverage PR campaigning” tendencies will start trying to take it over or otherwise poison the well? (Edit: Or conversely, that people who still feel really hurt or confused about it might lash out more than I’d wish. I personally am more worried about the former.) I still think it might be good, overall.
Be sure to be clear EARLY about who you are inviting, and who you are excluding! It changes what people are willing to talk about.
...I am not personally the right person to do this, though.
(It is too easy to “other” me, if that makes sense.)
I feel like one of the only things the public LW thread could do here is to ensure public awareness of some of the unreasonably-strong reality/truth-suppressive pressures that were at play here, that there were some ways in which secrecy agreements were leveraged pretty badly to avoid accountability for harms, and to show a public ramp-down of opportunities to do so in the future.
Along with doing what we can to signal that we generally stand against over-simplistically demonizing the people and organizations involved in this.
… unreasonably-strong reality/truth-suppressive pressures that were at play here, that there were some ways in which secrecy agreements were leveraged pretty badly to avoid accountability for harms …
Hmm. This seems worth highlighting.
The NDAs (plus pressure to sign) point to this.
…
(The rest of this might be triggering to anyone who’s been through gaslighting or culty experiences: it contains blunt descriptions of certain forms of control and subjugation.)
...
About the rest of the truth-suppressive measures, I can only speculate. Here’s a list of possible mechanisms that come to mind, some (but not all) of which were corroborated by Zoe’s report:
Group hazing or activities that cause collective shame, making certain things hard to admit to oneself and others (plus inserting a bucket error where ‘shameful activity’ is bucketed with ‘the whole project’ or something)
This could include implanting group delusions that are shameful to admit.
Threats to one’s physical person or loved ones for revealing things
Threats to one’s reputation or ability to acquire resources for revealing things
Deprivation used to negatively / positively reinforce certain behaviors or stories (“well, if you keep talking like that, we’re gonna have to take your phone / food / place to sleep”)
Gaslighting specific individuals or subgroups (“what you’re experiencing is in your own head; look at other people, they are doing fine, stop being crazy / stop ruining the vibe / stop blocking the project”)
A lot of things could fit into this category.
Causing dissociation. (Thus disconnecting a person from their true yes/no or making it harder for them to discern truth from fiction.) This is very common among modern humans, though, and doesn’t seem as evil-sounding as the other examples. Modern humans are already very dissociated afaict.
It would become more evil if it was intentionally exploited or amplified.
Dissociation could be generalized or selective. Selective seems more problematic because it could be harder to detect.
Pretending there is common knowledge or an obvious norm around what should be private / confidential, when there is not. (There is some of this going around rationalist spaces already.) “Don’t talk about X behind their back, that’s inappropriate.” or “That’s their private business, stay out of it.” <-- Said in situations where it’s not actually inappropriate, or where the claim that it’s someone’s ‘private business’ is overreaching.
Deliberately introducing and enforcing a norm of privacy or confidentiality that breaks certain normal and healthy social accountability structures. (Compassionate gossip is healthy in groups, especially those living in residential community. Rationalists seem not to get this and tend to break Chesterton’s fence here, which I attribute to hubris. It seems worse to me if these norms are introduced out of self-serving fear.)
Sexual harassment, molestation, or assault. (This tends to result in silencing pretty effectively.)
Creating internal jockeying, using an artificial scarcity around status or other resources. A culture of oneupmanship. A culture of having to play ‘loyal’. People getting way too sucked into this game and having their motives hijacked. They internally align themselves with the interests of certain leaders or the group, leading to secrecy being part of their internal motivation system.
This one is really speculative, but if I imagine buying into the story that Geoff is like, a superintelligence basically, and can somehow predict my own thoughts and moves before I can, then … maybe I get paranoid about even having thoughts that go against (my projection of) his goals.
Basically, if I thought someone could legit read my mind and they were not-compassionate or if I thought that they could strategically outmaneuver me at every turn due to their overwhelming advantage, that might cause some fucked up stuff in my head that stays in there for a while.
“You can’t rely on your perspective / Everything is up for grabs.” All of your mental content (ideas, concepts, motions, etc.) is potentially good (and should be leaned on more heavily, overriding others) or bad (and should be ignored / downvoted / routed around / destroyed / pushed against); more openness to change is better, and there’s no solid place from which you can stand and see things. Of course, this is in many ways true and useful; but leaning into it creates much more room for others to selectively up/downvote stuff in you to keep you from reaching conclusions they don’t want you to reach; or, more likely, to up/downvote conclusions and have you rearrange yourself to harmonize with those judgements.
Trolling Hope placed in the project / leadership. Like: I care deeply that things go well in the world; the only way I concretely see that might happen, is through this project; so if this project is doomed, then there’s no Hope; so I may as well bet everything on worlds where the project isn’t doomed; so worlds where the project is doomed are irrelevant; so I don’t see / consider / admit X if X implies that the project is doomed, since X is entirely about irrelevant worlds.
Emotional reward conditioning. (This one is simple or obvious, but I think it’s probably actually a significant portion of many of these sorts of situations.) When you start to say information I don’t like, I’m angry at you, annoyed, frustrated, dismissive, scornful, derisive, insulting, blank-faced, uninterested, condescending, disgusted, creeped out, pained, hurt, etc. When you start to hide information I don’t like, or expound the opposite, I’m pleasant, endeared, happy, admiring, excited, etc. Conditioning shades into and overlaps other tactics like stonewalling (blank-faced, aiming at learned helplessness), shaming, and running interference (changing the subject), but conditioning has a particular systematic effect of making you “walk on eggshells” about certain things and feel relief / safety when you stick to appropriate narratives. And this systematic effect can be very strong and persist even when you’re away from the people who put it there, if you didn’t perfectly compartmentalize how-to-please-them from everything else in your mind.
If this resonates with you, I am very sorry.
I welcome additions to this list.