This reads like you feel compelled to avoid parsing the content of the OP, and instead intend to treat the criticisms it makes as a Lovecraftian horror the mind mustn’t engage with. Attempts to interpret this sort of illegible intent-to-reject as though it were well-intentioned criticism end up looking like:
I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation. I was catatonic for multiple days, afraid that by moving I would cause harm to those around me.
Very helpful to have a crisp example of this in text.
ETA: I blanked out the first few times I read Jessica’s post on anti-normativity, but interpreted that accurately as my own intent to reject the information rather than projecting my rejection onto the post itself, treated that as a serious problem I wanted to address, and was able to parse it after several more attempts.
I understood the first sentence of your comment to be something like “one of my hypotheses about Logan’s reaction is that Logan has some internal mental pressure to not-parse or not-understand the content of what Jessica is trying to convey.”
That makes sense to me as a hypothesis, if I’ve understood you, though I’d be curious for some guesses as to why someone might have such an internal mental pressure, and what it would be trying to accomplish or protect.
I didn’t follow the rest of the comment, mostly due to various words like “this” and “it” having ambiguous referents. Would you be willing to try everything after “attempts” again, using 3x as many words?
Summary: Logan reports a refusal to parse the content of the OP. Logan locates a problem nonspecifically in the OP, not in Logan’s specific reaction to it. This implies a belief that it would be bad to receive information from Jessica.
Logan reports a refusal to parse the content of the OP
But then, “the people most mentally concerned” happens, and I’m like, Which people were most mentally concerned? What does it mean to be mentally concerned? How could the author tell that those people were mentally concerned? Then we have “with strange social metaphysics”, and I want to know “what is social metaphysics?”, “what is it for social metaphysics to be strange or not strange?” and “what is it to be mentally concerned with strange social metaphysics”? Next is “were marginalized”. How were they marginalized? What caused the author to believe that they were marginalized? What is it for someone to be marginalized?
Most of this isn’t even slightly ambiguous, and Jessica explains most of the things being asked about, with examples, in the body of the post.
Logan locates a nonspecific problem in the OP, not in Logan’s response to it.
I just, also have this feeling like something… isn’t just wrong here, but is going wrong, and maybe the going has momentum, and I wonder how many readers will get temporarily trapped in the upside down mirror maze while thinking they’re eating potatoes, unless they slow way way down and help me figure out what on earth is happening in this post.
This isn’t a description of a specific criticism or disagreement. This is a claim that the post is nonspecifically going to cause readers to become disoriented and trapped.
This implies a belief that it would be bad to receive information from Jessica.
If the objection isn’t that Jessica is mistaken but that she’s “going wrong,” that implies that the contents of Jessica’s mind are dangerous to interact with. This is the basic trope of Lovecraftian horror—that there are some real things the human mind can’t handle and therefore wants to avoid knowing. If something is dangerous, like nuclear waste or lions, we might want to contain it or otherwise keep it at a distance.
Since there’s no mechanism suggested, this looks like an essentializing claim. If the problem isn’t something specific that Jessica is doing or some specific transgression she’s committing, then maybe that means Jessica’s just intrinsically dangerous. Even if not, if Jessica were going to take this concern seriously, without a theory of how what she’s doing is harmful, she would have to treat all of her intentions as dangerous and self-contain.
In other words, she’d have to proceed as though she might be intrinsically evil (“isn’t just wrong here, but is going wrong, and maybe the going has momentum”), is in a hell of her own creation (“I felt like I was being invited into a mirror-maze that the author had been trapped in for… an unknown but very long amount of time.”), and ought to avoid taking actions, i.e. become catatonic.
I also don’t know what “social metaphysics” means.
I get the mood of the story. But if you look at the specific accusations, here is what I found; maybe I overlooked something:
there were at least 3 other cases of psychiatric institutionalizations by people in the social circle immediate to MIRI/CFAR; at least one other than me had worked at MIRI for a significant time, and at least one had done work with MIRI on a shorter-term basis. There was, in addition, a case of someone becoming very paranoid, attacking a mental health worker, and hijacking her car, leading to jail time; this person was not an employee of either organization, but had attended multiple CFAR events including a relatively exclusive AI-focused one.
There are even cases of suicide in the Berkeley rationality community [...] associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption
a prominent researcher was going around convincing people that human-level AGI was coming in 5-15 years.
MIRI became very secretive about research. Many researchers were working on secret projects, and I learned almost nothing about these. I and other researchers were told not to even ask each other about what others of us were working on, on the basis that if someone were working on a secret project, they may have to reveal this fact. Instead, we were supposed to discuss our projects with an executive, who could connect people working on similar projects.
Someone in the community told me that for me to think AGI probably won’t be developed soon, I must think I’m better at meta-rationality than Eliezer Yudkowsky, a massive claim of my own specialness
Years before, MIRI had a non-disclosure agreement that members were pressured to sign, as part of a legal dispute with Louie Helm.
Anna Salamon said that Michael was causing someone else at MIRI to “downvote Eliezer in his head” and that this was bad because it meant that the “community” would not agree about who the leaders were, and would therefore have akrasia issues due to the lack of agreement on a single leader in their head telling them what to do.
MIRI had a “world-saving plan”. [...] Nate Soares frequently talked about how it was necessary to have a “plan” to make the entire future ok, to avert AI risk; this plan would need to “backchain” from a state of no AI risk and may, for example, say that we must create a human emulation using nanotechnology that is designed by a “genie” AI, which does a narrow task rather than taking responsibility for the entire future; this would allow the entire world to be taken over by a small group including the emulated human.
Our task was to create an integrated, formal theory of values, decisions, epistemology, self-improvement, etc (“Friendliness theory”), which would help us develop Friendly AI faster than the rest of the world combined was developing AGI (which was, according to leaders, probably in less than 20 years). It was said that a large part of our advantage in doing this research so fast was that we were “actually trying” and others weren’t. It was stated by multiple people that we wouldn’t really have had a chance to save the world without Eliezer Yudkowsky.
I heard that “political” discussions at CFAR (e.g. determining how to resolve conflicts between people at the organization, which could result in people leaving the organization) were mixed with “debugging” conversations, in a way that would make it hard for people to focus primarily on the debugged person’s mental progress without imposing pre-determined conclusions. Unfortunately, when there are few people with high psychological aptitude around, it’s hard to avoid “debugging” conversations having political power dynamics, although it’s likely that the problem could have been mitigated.
I recall talking to a former CFAR employee who was scapegoated and ousted after failing to appeal to the winning internal coalition; he was obviously quite paranoid and distrustful, and another friend and I agreed that he showed PTSD symptoms.
This is like 5-10% of the text. A curious thing is that it is actually the remaining 90-95% of the text that evokes bad feelings in the reader; at least it did in my case.
To compare: when I was reading Zoe’s article, I was shocked by the described facts. When I was reading Jessica’s article, I was shocked by the horrible things that happened to her, but the facts themselves felt… most of them boring… The most worrying part was about a group of people who decided that CFAR was evil, spent some time blogging against CFAR, and then some of them killed themselves; which is very sad, but I fail to see how exactly CFAR is responsible for this, when it seems like the anti-CFAR group actually escalated the underlying problems to the point of suicide. (This reminds me of XiXiDu describing how fighting against MIRI causes him health problems; I feel bad that he has these problems, but I am not sure what MIRI could possibly do to stop them.)
Jessica’s narrative is that MIRI/CFAR is just like Leverage, except less transparent. Yet when she mentions specific details, it often goes something like this: “Zoe mentioned that Leverage did X. CFAR does not do X, but I feel terrible anyway, so it is similar. Here is something vaguely analogous.” Like, how can you conclude that not doing something bad is even worse than doing it, because it is less transparent?! Of course it is less transparent if it, you know, actually does not exist.
Or maybe I’m tired and failing at reading comprehension. I wish someone would rewrite the article to focus on the specific accusations against MIRI/CFAR and remove all those analogies-except-not-really with Zoe; just make it a standalone list of specific accusations. Then let’s discuss that.
This comment was very helpful. Thank you.
Thanks for the expansion! Mulling.