Sorry, yes, I meant the psychosis was an emergency. Non-psychotic discussion of auras/demons isn’t.
I’m kind of unclear what we’re debating now.
I interpret us as both agreeing that there are people talking about auras and demons who are not having psychiatric emergencies (eg random hippies, Catholic exorcists), and they should not be bothered, except insofar as you feel like having rational arguments about it.
I interpret us as both agreeing that you were having a psychotic episode, that you were going further / sounded less coherent than the hippies and Catholics, and that some hypothetical good diagnostician / good friend should have noticed that and suggested you seek help.
Am I right that we agree on those two points? Can you clarify what you think our crux is?
Verbal coherence level seems like a weird place to locate the disagreement—Jessica maintained approximate verbal coherence (though with increasing difficulty) through most of her episode. I’d say even in October 2017, she was more verbally coherent than e.g. the average hippie or Catholic, because she was trying at all.
The most striking feature was actually her ability to take care of herself rapidly degrading, as evidenced by e.g. getting lost almost immediately after leaving her home, wandering for several miles, then calling me for help and having difficulty figuring out where she was—IIRC, took a few minutes to find cross streets. When I found her she was shuffling around in a daze, her skin looked like she’d been scratching it much more than usual, clothes were awkwardly hung on her body, etc. This was on either the second or third day, and things got almost monotonically worse as the days progressed.
The obvious cause for concern was “rapid descent in presentation from normal adult to homeless junkie”. Before that happened, it was not at all obvious this was an emergency. Who hasn’t been kept up all night by anxiety after a particularly stressful day in a stressful year?
I think the focus on verbal coherence is politically convenient for both of you. It makes this case into an interesting battleground for competing ideologies, where they can both try to create blame for a bad thing.
Scott wants to do this because AFAICT his agenda is to marginalize discussion of concepts from woo / psychedelia / etc, and would like to claim that Jess’ interest in those was a clear emergency. Jess wants to do this because she would like to claim that the ideas at MIRI directly drove her crazy.
I worked there too, and left at the same time for approximately the same reasons. We talked about it extensively at the time. It’s not plausible that it was even in-frame that considering the details of S-risks in the vein of Unsong’s Broadcast could possibly be helpful for alignment research. Basilisk-baiting like that would generally have been frowned upon, but mostly just wouldn’t have come up.
The obvious sources of madness here were:
The extreme burden of responsibility for the far future (combined with the position that MIRI was uniquely essential to this), and the encouragement to take this responsibility seriously, were obviously stressful.
The local political environment at the time was a mess—splinters were forming, paranoia was widespread. A bunch of people we respected and worked with had decided the world was going to end, very soon, uncomfortably soon, and they were making it extremely difficult for us to check their work. This uncertainty was, uh, stressful.
Psychedelics very obviously induce states closer-than-usual to psychosis. This is what’s great about them—they let you dip a toe into the psychotic world and be back the next day, so you can take some of the insights with you. Also, this makes them a risk for inducing psychotic episodes. It’s not a coincidence that every episode I remember Jess having in 2017 and 2018 was a direct result of a trip-gone-long.
Latent tendency towards psychosis
Critically, I don’t think any of these factors would have been sufficient on their own. The direct content of MIRI’s research, and the woo stuff, both seem like total red herrings in comparison to any of these 4 issues.
I want to specifically highlight “A bunch of people we respected and worked with had decided the world was going to end, very soon, uncomfortably soon, and they were making it extremely difficult for us to check their work.” I noticed this second-hand at the time, but didn’t see any paths toward making things better. I think it had really harmful effects on the community, and is worth thinking a lot about before something similar happens again.
Thanks for giving your own model and description of the situation!
Regarding latent tendency, I don’t have a family history of psychosis (but I do of bipolar), although that doesn’t rule out a latent tendency. It’s unclear what “latent tendency” means exactly; it’s kind of pretending that the real world is a 3-node Bayesian network (self tendency towards X, environment tendency towards inducing X, whether X actually happens) rather than a giant web of causality, but maybe there’s some way to specify it more precisely.
I think the 4 factors you listed are the vast majority, so I partially agree with your “red herring” claim.
The “woo” language was causal, I think, mostly because I feared that others would apply coercion to me if I used it too much (even if I had a more detailed model that I could explain upon request), and there was a bad feedback loop around thinking that I was crazy and/or that other people would think I was crazy, with other people playing into this.
I think I originally wrote about basilisk type things in the post because I was very clearly freaking out about abstract evil at the time of psychosis (basically a generalization of utility function sign flips), and I thought Scott’s original comment would have led people to think I was thinking about evil mainly because of Michael, when actually I was thinking about evil for a variety of reasons. I was originally going to say “maybe all this modeling of adversarial/evil scenarios at my workplace contributed, but I’m not sure” but an early reader said “actually wait, based on what you’ve said what you experienced later was a natural continuation of the previous stuff, you’re very much understating things” and suggested (an early version of) the last paragraph of the basilisk section, and that seemed likely enough to include.
It’s pretty clear that thinking about basilisk-y scenarios in the abstract was part of MIRI’s agenda (e.g. the Arbital article). Here’s a comment by Rob Bensinger saying it’s probably bad to try to make an AI that does a lot of interesting stuff and has a good time doing it, because that objective is too related to consciousness and that might create a lot of suffering. (That statement references the “s-risk” concept, and if someone doesn’t know what that is and tries to find out, they could easily end up at a Brian Tomasik article recommending thinking about what it’s like to be dropped in lava.)
The thing is it seems pretty hard to evaluate an abstract claim like Rob’s without thinking about details. I get that there are arguments against thinking about the details (e.g. it might drive you crazy or make you more extortable) but natural ways of thinking about the abstract question (e.g. imagination / pattern completion / concretization / etc) would involve thinking about details even if people at MIRI would in fact dis-endorse thinking about the details. It would require a lot of compartmentalization to think about this question in the abstract without thinking about the details, and some people are more disposed to do that than others, and I expect compartmentalization of that sort to cause worse FAI research, e.g. because it might lead to treating “human values” as a LISP token.
[EDIT: Just realized Buck Shlegeris (someone who recently left MIRI) recently wrote a post called “Worst-case thinking in AI alignment”… seems concordant with the point I’m making.]
hmm… this could have come down to spending time in different parts of MIRI? I mostly worked on the “world’s last decent logic department” stuff—maybe the more “global strategic” aspects of MIRI work, at least the parts behind closed doors I wasn’t allowed through, were more toxic? Still feels kinda unlikely but I’m missing info there so it’s just a hunch.
My guess is that it has more to do with willingness to compartmentalize than with which part of MIRI you were in per se. Compartmentalization is negatively correlated with “taking on responsibility” for more of the problem. I’m sure you can see why it would be appealing to avoid giving in to extortion in real life, not just on whiteboards, and attempting that with a skewed model of the situation can lead to outlandish behavior like Ziz resisting arrest as hard as possible.
I think this is a persistent difference between us but isn’t especially relevant to the difference in outcomes here.
I’d more guess that the reason you had psychoses and I didn’t had to do with you having anxieties about being irredeemably bad that I basically didn’t at the time. Seems like this would be correlated with your feeling like you grew up in a Shin Sekai Yori world?
I clearly had more scrupulosity issues than you and that contributed a lot. Relevantly, the original Roko’s Basilisk post is putting AI sci-fi detail on a fear I am pretty sure a lot of EAs feel/felt in their heart, that something nonspecifically bad will happen to them because they are able to help a lot of people (due to being pivotal on the future), and know this, and don’t do nearly as much as they could. If you’re already having these sorts of fears then the abstract math of extortion and so on can look really threatening.
When I got back into town and talked with Jessica, she was talking about how it might be wrong to take actions that might possibly harm others, i.e. pretty much any actions, since she might not learn fast enough for this to come out net positive. Seems likely to me that the content of Jessica’s anxious perseveration was partly causally upstream of the anxious perseveration itself.
I agree that a decline in bodily organization was the main legitimate reason for concern. It seems obviously legitimate for Jessica (and me) to point out that Scott is proposing a standard that cannot feasibly be applied uniformly, since it’s not already common knowledge that Scott isn’t making sense here, and his prior comments on this subject have been heavily upvoted. The main alternative would be to mostly stop engaging on LessWrong, which I have done.
I don’t fully understand what “latent tendency towards psychosis” means functionally or what predictions it makes, so it doesn’t seem like an adequate explanation. I do know that there’s correlation within families, but I have a family history of schizophrenia and Jessica doesn’t, so if that’s what you mean by latent tendency it doesn’t seem to obviously have an odds ratio in the correct direction within our local cluster.
By latent tendency I don’t mean family history, though it’s obviously correlated. I claim that there’s this fact of the matter about Jess’ personality, biology, etc, which is that it’s easier for her to have a psychotic episode than for most people. This seems not plausibly controversial.
I’m not claiming a gears-level model here. When you see that someone has a pattern of <problem> that others in very similar situations did not have, you should assume some of the causality is located in the person, even if you don’t know how.
Listing “I don’t know, some other reason we haven’t identified yet” as an “obvious source” can make sense as a null option, but giving it a virtus dormitiva type name is silly.
I think that Jessica has argued with some plausibility that her psychotic break was in part the result of taking aspects of the AI safety discourse more seriously and unironically than the people around her, combined with adversarial pressures and silencing. This seems like a gears-level model that might be more likely in people with a cognitive disposition correlated with psychosis.
I interpret us as both agreeing that there are people talking about auras who are not having psychiatric emergencies (eg random hippies), and they should not be bothered.
Agreed.
I interpret us as both agreeing that you were having a psychotic episode, that you were going further / sounded less coherent than the hippies, and that some hypothetical good diagnostician / good friend should have noticed that and suggested you seek help.
Agreed during October 2017. Disagreed substantially before then (January-June 2017, when I was at MIRI).
(I edited the post to make it clear how I misinterpreted your comment.)
Interpreting you as saying that January-June 2017 you were basically doing the same thing as the Leveragers when talking about demons and had no other signs of psychosis, I agree this was not a psychiatric emergency, and I’m sorry if I got confused and suggested it was. I’ve edited my post also.
One thing to add is I think in the early parts of my psychosis (before the “mind blown by Ra” part) I was as coherent or more coherent than hippies are on regular days, and even after that for some time (before actually being hospitalized) I might have been as coherent as they were on “advanced spiritual practice” days (e.g. middle of a meditation retreat or experiencing Kundalini awakening). I was still controlled pretty aggressively with the justification that I was being incoherent, and I think that control caused me to become more mentally disorganized and verbally incoherent over time. The math test example is striking: I think less than 0.2% of people could pass it (to Zack’s satisfaction) on a good day, and less than 3% could give an answer as good as the one I gave, yet this was still used to “prove” that I was unable to reason.
My recollection is that at that time you were articulately expressing what seemed like a level of scrupulosity typical of many Bay Area Rationalists. You were missing enough sleep that I was worried, but you seemed oriented x3. I don’t remember you talking about demons or auras at all, and have no recollection of you confusedly reifying agents who weren’t there.