The main thing I’d fight if I felt fighty right now is the claim that by not listening to talk about demons and auras MIRI (or by extension me, who endorsed MIRI’s decision) is impinging on her free speech.
You wrote that talking about auras and demons the way Jessica did while at MIRI should be considered a psychiatric emergency. When done by a practicing psychiatrist, this is an impingement on Jessica’s free speech. You wrote this in response to a post that contained the following and only the following mentions of demons or auras:
During this time, I was intensely scrupulous; I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation. [after Jessica had left MIRI]
I heard that the paranoid person in question was concerned about a demon inside him, implanted by another person, trying to escape. [description of what someone else said]
The weirdest part of the events recounted is the concern about possibly-demonic mental subprocesses being implanted by other people. [description of Zoe’s post]
As weird as the situation got, with people being afraid of demonic subprocesses being implanted by other people, there were also psychotic breaks involving demonic subprocess narratives around MIRI and CFAR. [description of what other people said, and possibly an allusion to the facts described in the first quote, after she had left MIRI]
While in Leverage the possibility of subtle psychological influence between people was discussed relatively openly, around MIRI/CFAR it was discussed covertly, with people being told they were crazy for believing it might be possible. (I noted at the time that there might be a sense in which different people have “auras” in a way that is not less inherently rigorous than the way in which different people have “charisma”, and I feared this type of comment would cause people to say I was crazy.)
Only the last one is a description of a thing Jessica herself said while working at MIRI. Like Jessica when she worked at MIRI, I too believe that people experiencing psychotic breaks sometimes talk about demons. Like Jessica when she worked at MIRI, I too believe that auras are not obviously less real than charisma. Am I experiencing a psychiatric emergency?
You wrote that talking about auras and demons the way Jessica did while at MIRI should be considered a psychiatric emergency. When done by a practicing psychiatrist, this is an impingement on Jessica’s free speech.
I don’t think I said any talk of auras should be a psychiatric emergency, otherwise we’d have to commit half of Berkeley. I said that “in the context of her being borderline psychotic”, i.e. including this symptom, they should have “[told] her to seek normal medical treatment”. Suggesting that someone seek normal medical treatment is pretty different from saying this is a psychiatric emergency, and hardly an “impingement” on free speech. I’m kind of playing this in easy mode here because in hindsight we know Jessica ended up needing treatment; I feel like this makes it pretty hard to make it sound sinister when I suggest this.
You wrote this in response to a post that contained the following and only the following mentions of demons or auras:
“During this time, I was intensely scrupulous; I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation...” [followed by several more things along these lines]
Yes? That actually sounds pretty bad to me. If I ever go around saying that I have destroyed significant parts of the world with my demonic powers, you have my permission to ask me if maybe I should seek psychiatric treatment. If you say “Oh yes, Scott, that’s a completely normal and correct thing to think, I am validating you and hope you go deeper into that”, then once I get better I’ll accuse you of being a bad friend. Jessica’s doing the opposite and accusing MIRI of being a bad workplace for not validating and reinforcing her in this!
I think what we all later learned about Leverage confirms all this. Leverage did the thing Jessica wanted MIRI to do: told everyone ex cathedra that demons were real and they were right to be afraid of them, and so they got an epidemic of mass hysteria that sounds straight out of a medieval nunnery. People were getting all sorts of weird psychosomatic symptoms, and one of the commenters said their group house exploded when one member accused another member of being possessed by demons, refused to talk or communicate with them in case the demons spread, and the “possessed” had to move out. People felt traumatized, relationships were destroyed, it sounded awful.
MIRI is under no obligation to ~~validate and signal-boost~~ tolerate individual employees’ belief in demons, including some sort of metaphorical demons. In fact, I think they’re under a mild obligation not to, as part of their role as ~leader-ish in a rationalist community. They’re under an obligation to model good epistemics for the rest of us and avoid more Leverage-type mass hysterias.

One of my heroes is this guy:

https://www.youtube.com/watch?v=Bmo1a-bimAM
Surinder Sharma, an Indian mystic, claimed to be able to kill people with a voodoo curse. He was pretty convincing and lots of people were legitimately scared. Sanal Edamaruku, president of the Indian Rationalist Organization, challenged Sharma to kill him. Since this is the 21st century and capitalism is amazing, they decided to do the whole death curse on live TV. Sharma sprinkled water and chanted magic words around Edamaruku. According to Wikipedia, “the challenge ended after several hours, with Edamaruku surviving unharmed”.
If Leverage had a few more Sanal Edamarukus, a lot of people would have avoided a pretty weird time.
I think the best response MIRI could have had to all this would have been for Nate Soares to challenge Geoff Anders to infect him with a demon on live TV, then walk out unharmed and laugh. I think the second-best was the one they actually did.
EDIT: I think I misunderstood parts of this, see below comments.
I said that “in the context of her being borderline psychotic”, i.e. including this symptom, they should have “[told] her to seek normal medical treatment”. Suggesting that someone seek normal medical treatment is pretty different from saying this is a psychiatric emergency, and hardly an “impingement” on free speech.
It seems like you’re trying to walk back your previous claim, which did use the “psychiatric emergency” term:
Jessica is accusing MIRI of being insufficiently supportive to her by not taking her talk about demons and auras seriously when she was borderline psychotic, and comparing this to Leverage, who she thinks did a better job by promoting an environment where people accepted these ideas. I think MIRI was correct to be concerned and (reading between the lines) telling her to seek normal medical treatment, instead of telling her that demons were real and she was right to worry about them, and I think her disagreement with this is coming from a belief that psychosis is potentially a form of useful creative learning. While I don’t want to assert that I am 100% sure this can never be true, I think it’s true rarely enough, and with enough downside risk, that treating it as a psychiatric emergency is warranted.
Reading again, maybe by “it” in the last sentence you meant “psychosis” not “talking about auras and demons”? Even if that’s what you meant I hope you can see why I interpreted it the way I did?
(Note, I do not think I would have been diagnosed with psychosis if I had talked to a psychiatrist during the time I was still at MIRI, although it’s hard to be certain and it’s hard to prove anyway.)
Yes? That actually sounds pretty bad to me. If I ever go around saying that I have destroyed significant parts of the world with my demonic powers, you have my permission to ask me if maybe I should seek psychiatric treatment.
This is while I was already in the middle of a psychotic break and in a hospital. Obviously we would agree that I needed psychiatric treatment at this point.
MIRI is under no obligation to validate and signal-boost individual employees’ belief in demons, including some sort of metaphorical demons.
“Validating and signal boosting” is not at all what I would want! I would want rational discussion and evaluation. The example you give at the end of challenging Geoff Anders on TV would be an example of rational evaluation.
(I definitely don’t think Leverage handled this optimally, and I think the sort of test you describe would have been good for them to do more of; I’m pointing to their lower rate of psychiatric incarceration as a point in favor of what they did, relatively speaking.)
What would a rational discussion of the claim Ben and I agree on (“auras are not obviously less real than charisma”) look like? One thing to do would be to see how much inter-rater agreement there is among aura-readers and charisma-readers, respectively, to see whether there is any perceivable feature being described at all. Another would be to see how predictive each rating is of other measurable phenomena (e.g. maybe “aura theory” predicts that people with “small auras” will allow themselves to be talked over by people with “big auras” more of the time; maybe “charisma theory” predicts people smile more when a “charismatic” person talks). Testing this might be hard but it doesn’t seem impossible.
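To make the first test concrete, here is a minimal sketch of an inter-rater agreement check; all the reader names and ratings below are made up for illustration, and nothing in it depends on what auras “really are”:

```python
# Toy inter-rater agreement check: hypothetical "readers" each score the
# same five people on a 1-5 scale for aura (or, separately, charisma).
from itertools import combinations

ratings = {  # all data made up for illustration
    "reader_a": [5, 2, 4, 1, 3],
    "reader_b": [4, 2, 5, 1, 3],
    "reader_c": [1, 5, 2, 4, 2],
}

def pearson(xs, ys):
    """Pearson correlation between two rating vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Consistently high pairwise correlations suggest the readers are tracking
# *some* shared perceivable feature, whatever its underlying nature;
# correlations near zero suggest there is no common referent at all.
for a, b in combinations(sorted(ratings), 2):
    print(a, b, round(pearson(ratings[a], ratings[b]), 2))
```

The same scaffolding would work for the second, predictive test: swap one reader’s rating vector for a measured behavior (e.g. how often each person gets talked over) and look at the correlation.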
(P.S. It seems like the AI box experiment (itself similar to the more standard Milgram Experiment) is a test of mind control ability, which in some cases comes out positive, like the Milgram Experiment; this goes to show that, depending on the setup of the Anders/Soares demon test, it might not have a completely obvious result.)
Sorry, yes, I meant the psychosis was the emergency. Non-psychotic discussion of auras/demons isn’t.
I’m kind of unclear what we’re debating now.
I interpret us as both agreeing that there are people talking about auras and demons who are not having psychiatric emergencies (eg random hippies, Catholic exorcists), and they should not be bothered, except insofar as you feel like having rational arguments about it.
I interpret us as both agreeing that you were having a psychotic episode, that you were going further / sounded less coherent than the hippies and Catholics, and that some hypothetical good diagnostician / good friend should have noticed that and suggested you seek help.
Am I right that we agree on those two points? Can you clarify what you think our crux is?
Verbal coherence level seems like a weird place to locate the disagreement—Jessica maintained approximate verbal coherence (though with increasing difficulty) through most of her episode. I’d say even in October 2017, she was more verbally coherent than e.g. the average hippie or Catholic, because she was trying at all.
The most striking feature was actually her ability to take care of herself rapidly degrading, as evidenced by e.g. getting lost almost immediately after leaving her home, wandering for several miles, then calling me for help and having difficulty figuring out where she was—IIRC, took a few minutes to find cross streets. When I found her she was shuffling around in a daze, her skin looked like she’d been scratching it much more than usual, clothes were awkwardly hung on her body, etc. This was on either the second or third day, and things got almost monotonically worse as the days progressed.
The obvious cause for concern was “rapid descent in presentation from normal adult to homeless junkie”. Before that happened, it was not at all obvious this was an emergency. Who hasn’t been kept up all night by anxiety after a particularly stressful day in a stressful year?
I think the focus on verbal coherence is politically convenient for both of you. It makes this case into an interesting battleground for competing ideologies, where they can both try to create blame for a bad thing.
Scott wants to do this because AFAICT his agenda is to marginalize discussion of concepts from woo / psychedelia / etc, and would like to claim that Jess’ interest in those was a clear emergency. Jess wants to do this because she would like to claim that the ideas at MIRI directly drove her crazy.
I worked there too, and left at the same time for approximately the same reasons. We talked about it extensively at the time. It’s not plausible that it was even in-frame that considering details of S-risks in the vein of Unsong’s Broadcast would possibly be helpful for alignment research. Basilisk-baiting like that would generally have been frowned upon, but mostly just wouldn’t have come up.
The obvious sources of madness here were:

1. The extreme burden of responsibility for the far future (combined with the position that MIRI was uniquely essential to this), and encouragement to take this responsibility seriously, is obviously stressful.
2. The local political environment at the time was a mess—splinters were forming, paranoia was widespread. A bunch of people we respected and worked with had decided the world was going to end, very soon, uncomfortably soon, and they were making it extremely difficult for us to check their work. This uncertainty was, uh, stressful.
3. Psychedelics very obviously induce states closer-than-usual to psychosis. This is what’s great about them—they let you dip a toe into the psychotic world and be back the next day, so you can take some of the insights with you. Also, this makes them a risk for inducing psychotic episodes. It’s not a coincidence that every episode I remember Jess having in 2017 and 2018 was a direct result of a trip-gone-long.
4. Latent tendency towards psychosis.
Critically, I don’t think any of these factors would have been sufficient on their own. The direct content of MIRI’s research, and the woo stuff, both seem like total red herrings in comparison to any of these 4 issues.
I want to specifically highlight “A bunch of people we respected and worked with had decided the world was going to end, very soon, uncomfortably soon, and they were making it extremely difficult for us to check their work.” I noticed this second-hand at the time, but didn’t see any paths toward making things better. I think it had really harmful effects on the community, and is worth thinking a lot about before something similar happens again.
Thanks for giving your own model and description of the situation!
Regarding latent tendency, I don’t have a family history of psychosis (but I do of bipolar), although that doesn’t rule out latent tendency. It’s unclear what “latent tendency” means exactly; it’s kind of pretending that the real world is a 3-node Bayesian network (self tendency towards X, environment tendency towards inducing X, whether X actually happens) rather than a giant web of causality, but maybe there’s some way to specify it more precisely.
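To make concrete what the 3-node picture commits you to, here is a toy noisy-OR rendering of it; every probability is a made-up placeholder, and my complaint is precisely that the real causal web has far more nodes than this:

```python
# Toy noisy-OR version of the 3-node network:
# (self tendency, environment) -> whether X actually happens.
# All numbers are hypothetical placeholders.

def p_psychosis(tendency: bool, stressful_env: bool) -> float:
    """P(outcome) if each active risk factor independently 'fires' it."""
    p_base = 0.01                           # hypothetical baseline rate
    p_tend = 0.10 if tendency else 0.0      # hypothetical effect of tendency
    p_env = 0.05 if stressful_env else 0.0  # hypothetical effect of environment
    return 1 - (1 - p_base) * (1 - p_tend) * (1 - p_env)

for t in (False, True):
    for e in (False, True):
        print(f"tendency={t}, env={e}: P = {p_psychosis(t, e):.3f}")
```

“Latent tendency” only adds predictive content here if its parameter can be estimated separately from the environment’s; in a giant web of causality there may be no such clean factorization.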
I think the 4 factors you listed account for the vast majority of it, so I partially agree with your “red herring” claim.
The “woo” language was causal, I think, mostly because I feared that others would apply coercion to me if I used it too much (even if I had a more detailed model that I could explain upon request), and there was a bad feedback loop between my thinking that I was crazy, my worrying that other people would think I was crazy, and other people playing into this.
I think I originally wrote about basilisk-type things in the post because I was very clearly freaking out about abstract evil at the time of psychosis (basically a generalization of utility function sign flips), and I thought Scott’s original comment would have led people to think I was thinking about evil mainly because of Michael, when actually I was thinking about evil for a variety of reasons. I was originally going to say “maybe all this modeling of adversarial/evil scenarios at my workplace contributed, but I’m not sure”, but an early reader said “actually wait, based on what you’ve said, what you experienced later was a natural continuation of the previous stuff; you’re very much understating things” and suggested (an early version of) the last paragraph of the basilisk section, and that seemed likely enough to include.
It’s pretty clear that thinking about basilisk-y scenarios in the abstract was part of MIRI’s agenda (e.g. the Arbital article). Here’s a comment by Rob Bensinger saying it’s probably bad to try to make an AI that does a lot of interesting stuff and has a good time doing it, because that objective is too related to consciousness and that might create a lot of suffering. (That statement references the “s-risk” concept, and if someone doesn’t know what that is and tries to find out, they could easily end up at a Brian Tomasik article recommending thinking about what it’s like to be dropped in lava.)
The thing is, it seems pretty hard to evaluate an abstract claim like Rob’s without thinking about details. I get that there are arguments against thinking about the details (e.g. it might drive you crazy or make you more extortable), but natural ways of thinking about the abstract question (e.g. imagination / pattern completion / concretization / etc.) would involve thinking about details even if people at MIRI would in fact dis-endorse thinking about the details. It would require a lot of compartmentalization to think about this question in the abstract without thinking about the details; some people are more disposed to do that than others, and I expect compartmentalization of that sort to cause worse FAI research, e.g. because it might lead to treating “human values” as a LISP token.
[EDIT: Just realized Buck Shlegeris (someone who recently left MIRI) recently wrote a post called “Worst-case thinking in AI alignment”… seems concordant with the point I’m making.]
hmm… this could have come down to spending time in different parts of MIRI? I mostly worked on the “world’s last decent logic department” stuff—maybe the more “global strategic” aspects of MIRI work, at least the parts behind closed doors I wasn’t allowed through, were more toxic? Still feels kinda unlikely but I’m missing info there so it’s just a hunch.
My guess is that it has more to do with willingness to compartmentalize than with which part of MIRI one was in per se. Compartmentalization is negatively correlated with “taking on responsibility” for more of the problem. I’m sure you can see why it would be appealing to avoid giving in to extortion in real life, not just on whiteboards, and attempting that with a skewed model of the situation can lead to outlandish behavior like Ziz resisting arrest as hard as possible.
I think this is a persistent difference between us but isn’t especially relevant to the difference in outcomes here.
I’d more guess that the reason you had psychoses and I didn’t had to do with you having anxieties about being irredeemably bad that I basically didn’t at the time. Seems like this would be correlated with your feeling like you grew up in a Shin Sekai Yori world?
I clearly had more scrupulosity issues than you and that contributed a lot. Relevantly, the original Roko’s Basilisk post puts AI sci-fi detail on a fear I am pretty sure a lot of EAs feel/felt in their hearts: that something nonspecifically bad will happen to them because they are able to help a lot of people (due to being pivotal on the future), and know this, and don’t do nearly as much as they could. If you’re already having these sorts of fears then the abstract math of extortion and so on can look really threatening.
When I got back into town and talked with Jessica, she was talking about how it might be wrong to take actions that might possibly harm others, i.e. pretty much any actions, since she might not learn fast enough for this to come out net positive. Seems likely to me that the content of Jessica’s anxious perseveration was partly causally upstream of the anxious perseveration itself.
I agree that a decline in bodily organization was the main legitimate reason for concern. It seems obviously legitimate for Jessica (and me) to point out that Scott is proposing a standard that cannot feasibly be applied uniformly, since it’s not already common knowledge that Scott isn’t making sense here, and his prior comments on this subject have been heavily upvoted. The main alternative would be to mostly stop engaging on LessWrong, which I have done.
I don’t fully understand what “latent tendency towards psychosis” means functionally or what predictions it makes, so it doesn’t seem like an adequate explanation. I do know that there’s correlation within families, but I have a family history of schizophrenia and Jessica doesn’t, so if that’s what you mean by latent tendency it doesn’t seem to obviously have an odds ratio in the correct direction within our local cluster.
By latent tendency I don’t mean family history, though it’s obviously correlated. I claim that there’s this fact of the matter about Jess’ personality, biology, etc, which is that it’s easier for her to have a psychotic episode than for most people. This seems not plausibly controversial.
I’m not claiming a gears-level model here. When you see that someone has a pattern of <problem> that others in very similar situations did not have, you should assume some of the causality is located in the person, even if you don’t know how.
Listing “I don’t know, some other reason we haven’t identified yet” as an “obvious source” can make sense as a null option, but giving it a virtus dormitiva type name is silly.
I think that Jessica has argued with some plausibility that her psychotic break was in part the result of taking aspects of the AI safety discourse more seriously and unironically than the people around her, combined with adversarial pressures and silencing. This seems like a gears-level model that might be more likely in people with a cognitive disposition correlated with psychosis.
I interpret us as both agreeing that there are people talking about auras who are not having psychiatric emergencies (eg random hippies), and they should not be bothered.
Agreed.
I interpret us as both agreeing that you were having a psychotic episode, that you were going further / sounded less coherent than the hippies, and that some hypothetical good diagnostician / good friend should have noticed that and suggested you seek help.
Agreed during October 2017. Disagreed substantially before then (January-June 2017, when I was at MIRI).
(I edited the post to make it clear how I misinterpreted your comment.)
Interpreting you as saying that January-June 2017 you were basically doing the same thing as the Leveragers when talking about demons and had no other signs of psychosis, I agree this was not a psychiatric emergency, and I’m sorry if I got confused and suggested it was. I’ve edited my post also.
One thing to add is I think in the early parts of my psychosis (before the “mind blown by Ra” part) I was as coherent or more coherent than hippies are on regular days, and even after that for some time (before actually being hospitalized) I might have been as coherent as they were on “advanced spiritual practice” days (e.g. middle of a meditation retreat or experiencing Kundalini awakening). I was still controlled pretty aggressively with the justification that I was being incoherent, and I think that control caused me to become more mentally disorganized and verbally incoherent over time. The math test example is striking: I think less than 0.2% of people could pass it (to Zack’s satisfaction) on a good day, and less than 3% could give an answer as good as the one I gave, yet this was still used to “prove” that I was unable to reason.
My recollection is that at that time you were articulately expressing what seemed like a level of scrupulosity typical of many Bay Area Rationalists. You were missing enough sleep that I was worried, but you seemed oriented x3. I don’t remember you talking about demons or auras at all, and have no recollection of you confusedly reifying agents who weren’t there.