My current belief is that Jessica is doing kind of a combination of “presenting things in a frame that looks pretty optimized as a confusing, hard-to-respond-to social attack”, which at first looked disingenuous to me. I’ve since seen her respond to a number of comments in a way that looks concretely like it’s trying to figure stuff out, update on new information, etc., without optimizing for maintaining the veiled social attack. My current belief is that there’s still some kind of unconscious veiled social attack going on that Jessica doesn’t have full conscious access to; it looks too optimized to be an accident. But I don’t know that Jessica, from within her current epistemic state, should agree with me.

Seconded (though I think “pretty optimized” is too strong).
My wild, not well-founded guess is that Jessica does have some conscious access to this and could fairly easily say more about what’s going on with her (and maybe has and I / we forgot?), along the lines of “owning” stuff (and that might help people hear her better by making possible resulting conflicts and anti-epistemology more available to talk about). I wonder if Jessica is in / views herself as in a conflict, such that saying that would be, or intuitively seem like, a self-harming or anti-epistemic thing to do. As a hypothetical example, saying “I’m hurt and angry and I don’t want to misdirect my anger and I want to figure things out and get the facts straight and I do want to direct my anger to correct targets” without further comment would to some real or imagined listeners sort of sound like “yeah don’t take anything I’m saying seriously, I’m just slinging mud because I’m mad, go ahead and believe the story that minimizes fault rather than what’s most consistent with the facts”. In other words, “owning one’s experience” is bucketed with failing to stand one’s ground / hold a perspective? (Not sure in which person/s the supposed bucketing is happening.)
For some context:

I got a lot of the material for this by trying to explain what I experienced to a “normal” person who wasn’t part of the scene while feeling free to be emotionally expressive (e.g. by screaming). Afterwards I found a new “voice” to talk about the problems in an annoyed way. I think this was really good for healing trauma and recovering memories.
I have a political motive to prevent Michael from being singled out as the person who caused my psychosis since he’s my friend. I in fact don’t think he was a primary cause, so this isn’t inherently anti-epistemic, but it likely caused me to write in a more lawyer-y fashion than I otherwise would. (Michael definitely didn’t prompt me to write the first draft of the document, and only wrote a few comments on the post.)
I’ve been working on this document for 1-2 weeks, doing a rolling release where I add more people to the document. It’s been somewhat stressful getting the memories/interpretations into written form without making false/misleading/indefensible statements along the way, or unnecessarily harming the reputations of people whose reputations I care about.
Some other people helped me edit this. I included some text from them without checking that it was as rigorous as the rest of the text I wrote. I think they made a different tradeoff than I did in terms of making “strong” statements that were less provable and potentially more accusatory. E.g., the one-paragraph overall summary was something I couldn’t have written myself: even as my co-writer was saying it out loud, I was having trouble tracking the semantics, which I thought was for trauma-related reasons. I included the paragraph partly because, on reflection, it seemed true (even if hard to prove), and because I was inhibited from writing a similar paragraph myself.
In future posts I’ll rewrite more such inclusions in my own words, since that way I can filter better for things I think I can actually rhetorically defend if pressed.
I originally wrote the post without the summary of core claims; a friend/co-editor pointed out that a lot of people wouldn’t read the whole thing and that it would be easier to follow with a summary, so I added it.
Raemon is right that I think people are being overly defensive and finding excuses to reject the information. Overall it seems like people are paying much, much more attention to the quality of my rhetoric than the subject matter the post is about, and that seems like a large obstacle to improving the problems I’m describing. I wrote the following tweet about it: “Suppose you publish a criticism of X movement/organization/etc. The people closest to the center are the most likely to defensively reject the information. People far away are unlikely to understand or care about X. It’s people at middle distance who appreciate it the most.” In fact, multiple people somewhat distant from the scene have said they really liked my post; one said he found it helpful for having a healthier relationship with EA and rationality.
Overall it seems like people are paying much, much more attention to the quality of my rhetoric than the subject matter the post is about
Just to be clear, I’m paying attention to the quality of your rhetoric because I cannot tell what the subject matter is supposed to be.
Upon being unable to actually distill out a set of clear claims, I fell back onto “okay, well, what sorts of conclusions would I be likely to draw if I just drank this all in trustingly/unquestioningly/uncritically?”
Like, “observe the result, and then assume (as a working hypothesis, held lightly) that the result is what was intended.”
And then, once I had that, I went looking to see whether it was justified/whether the post presented any actual reasons for me to believe what it left sandbox-Duncan believing, and found that the answer was basically “no.”
Which seems like a problem, for something that’s 13,000 words long and that multiple people apparently put a lot of effort into. 13,000 words on LessWrong should not, in my opinion, simultaneously have the properties of:
a) not having a discernible thesis
b) leaving a clear impression on the reader
c) that impression, upon revisit/explicit evaluation, seeming really quite false
I think it’s actually quite good that you felt called to defend your friend Michael, who (after reading) does in fact seem to me to be basically innocent w/r/t your episode. I think you could have “cleared Michael of all charges” in a much more direct and simple post that would have been compelling to me and others looking in; it seems like that’s maybe 1⁄5 of what’s going on in the above, and it’s scattered throughout, piecemeal. I’m not sure what you think the other 4⁄5 is doing, or why you wanted it there.
(I mean this straightforwardly—that I am not sure. I don’t mean it in an attacky fashion, like me not being sure implies that there’s no good reason or whatever.)
getting the memories/interpretations into written form without making false/misleading/indefensible statements along the way
I believe and appreciate the fact that you were attending to and cared about this, separate from the fact that I unfortunately do not believe you succeeded.
EDIT: The reason I leave this comment is that I sense a sort of … trend toward treating the critique as if it’s missing the point? And while it might be missing Jessica’s point, I do not think that the critique was about trivial matters while not attending to serious ones. I think the critique was very much centered on Stuff I Think Actually Matters.
FWIW, I consider myself to be precisely one of those “middle distance” people. I wasn’t around you during this time at MIRI, I’m not particularly invested in defending MIRI (except to the degree to which I believe MIRI is innocent in any given situation and am therefore actually invested in Defending An Innocent Party, which is different), and I have a very mixed-bag relationship with EA and rationality; solid criticisms of the EA/LW/rationality sphere usually find me quite sympathetic. I’m looking in as someone who would have appreciated a good, clear post highlighting things worth feeling concern over. I really wish this and its precursor had been, e.g., tight analogues to the Zoe post, which I similarly appreciated as a middle-distancer.
What, if any, are your (major) political motives regarding MIRI/CFAR/similar?

I really liked MIRI/CFAR during 2015-2016 (even though I had lots of criticisms) and I think I benefited a lot overall; I think things got bad in 2017 and haven’t recovered. E.g. MIRI has had many fewer good publications since 2017, and for reasons I’ve expressed, I don’t believe their private research is comparable in quality to their previous public research. (Maybe to some extent I got disillusioned and am overestimating how much things changed; I’m not entirely sure how to disentangle this.)
As revealed in my posts, I was a “dissident” during 2017, confusedly/fearfully trying to learn and share critiques, gather people into a splinter group, etc., so there’s somewhat of a legacy of a past conflict affecting the present, although it’s obviously less intense now, especially now that I can write about it.
I’ve noticed people trying to “center” everything around MIRI, justifying their actions in terms of “helping MIRI”, etc. (One LW mod told me and others in 2018 that LessWrong was primarily a recruiting funnel for MIRI, not a rationality promotion website, and someone else who was in the scene in 2016-2017 corroborated that this is a common opinion.) I think this is pretty bad, since these people have no way of checking how useful MIRI’s work is, and there’s a market for lemons (compare EA arguments against donating to even “reputable” organizations like UNICEF). It resembles idol worship, and that’s disappointing.
This is corroborated by some other former MIRI employees, e.g. someone who left sometime in the past 2 years who agreed with someone else’s characterization that MIRI was acting against its original mission.
I think lots of individuals at MIRI are intellectually productive and/or high-potential but pretty confused about a lot of things. I don’t currently see a more efficient way to communicate with them than by writing things on the Internet.
I have a long-standing disagreement about AI timelines. (I wrote a post saying people are grossly distorting things, which I believe and which I think is important partly because of the content of my recent posts about my experiences; Anna commented that the post was written in a “triggered” mind state, which seems pretty likely given the traumatic events I’ve described.) I think lots of people are getting freaked out about the world ending soon, and this is wrong and bad for their health. It’s like in Wild Wild Country, where the leader becomes increasingly isolated and starts making nearer-term doom predictions while the second-in-command becomes the de-facto social leader. (This isn’t an exact analogy, and I would be inhibited from making it except that I’m specifically being asked about my political motives; I’m not saying I have a good argument for this.)
I still think AI risk is a problem in the long term but I have a broader idea of what “AI alignment research” is, e.g. it includes things that would fall under philosophy/the humanities. I think the problem is really hard and people have to think in inter-disciplinary ways to actually come close to solving it (or to get one of the best achievable partial solutions). I think MIRI is drawing attention to a lot of the difficulties with the problem and that’s good even if I don’t think they can solve it.
Someone I know pointed out that Eliezer’s model might indicate that the AI alignment field has been net negative overall, because it sparked OpenAI and because MIRI currently has no good plans. If that’s true, it seems like a large change in the overall AI safety/x-risk space would be warranted.
My friends and I have been talking with Anna Salamon (head of CFAR) more over this year; she’s been talking about a lot of the problems that have happened historically and how she intends to do things differently going forward. That seems like a good sign, but she isn’t past the threshold of willing+able she would need to be to fix the scene herself.
I’m somewhat worried about criticizing these orgs too hard, for several reasons: I want to maintain relations with people in my previous social network; I don’t actually think they’re especially bad; my org (mediangroup.org) has previously gotten funding from a re-granting organization whose representative told me that my org is more likely to get funding if I write fewer “accusatory” blog posts (although I’m not sure whether I believe them about this at this point; maybe writing critiques causes people to think I’m more important and fund me more?); and it might spark “retaliation” (which need not be illegal, e.g. maybe people just criticize me a bunch in a way that’s embarrassing, or give me less money). I feel weird criticizing orgs that were as good for my career as these were, even though that doesn’t make that much sense from an ethical perspective.
I very much don’t think the central orgs can accomplish their goals if they can’t learn from criticism. A lot of the time I’m more comfortable in rat-adjacent/postrat-ish/non-rationalist spaces than in central rationalist spaces, because they are less enamored of the ideology and the central institutions. It’s easier to just attend a party and say lots of weird-but-potentially-revelatory things without getting caught in a bunch of defensiveness related to the history of the scene. One issue with these alternative social settings is that a lot of these people think it’s normal to take ideas less seriously in general, so they think, e.g., that I’m only speaking out about problems because I have a high level of “autism”, and that it’s too much to expect people to tell the truth when their rent stream depends on not acknowledging it. I understand how someone could come to this perspective, but it seems like somewhat of a figure-ground inversion that normalizes parasitic behavior.
I found this a very helpful and useful comment, and resonate with various bits of it (I also think I disagree with a good chunk of it, but a lot of it seems right overall).
I’m curious which parts resonate most with you (I’d ordinarily not ask this because it would seem rude, but I’m in a revealing-political-motives mood and figure the actual amount of pressure is pretty low).
I share the sense that something pretty substantial changed with MIRI in ~2017 and that something important got lost when that happened. I share some of the sense that people’s thinking about timelines is confused, though I do think overall pretty short timelines are justified (mine are on the longer end of what MIRI people tend to think, though much shorter than yours, IIRC). I think you are saying some important things about the funding landscape, and I have been pretty sad about the dynamics here as well, though I think the actual situation is pretty messy: some funders are really quite pro-critique, while others seem to me to be optimizing much more for something like the brand of the EA coalition.
I feel like this topic may deserve a top-level post (rather than an N-th level comment here).
EDIT: I specifically meant the “MIRI in ~2017” topic, although I am generally in favor of extracting all other topics from Jessica’s post in a way that would be easier for me to read.
Thanks, this is great (I mean, it clarifies a lot for me).
(This is helpful context, thanks.)