Are you describing me? It fits to a T, except my day job isn’t ML. I post using this shared anonymous account here because in the past, when I used my real name, I received death threats online from LW users. At a meetup, someone told me to my face that if my AGI project crossed a certain level of capability, they would personally hunt me down and kill me. They were quite serious.
I was once open-minded enough to consider AI x-risk seriously. I was unconvinced, but ready to be convinced. But you know what? Any ideology that leads to making death threats against peaceful, non-violent open source programmers is not something I want to let past my mental hygiene filters.
If you, the person reading this, seriously care about AI x-risk, then please think deeply about what causes this, and ask yourself what can be done to put a stop to this behavior. Even if you haven’t made such threats yourself, there is something about the rationalist community that causes this behavior to be expressed.
--
I would be remiss without laying out my own hypothesis. I believe much of this comes directly from ruthless utilitarianism and the “shut up and multiply” mentality. It is very easy to justify the murder of one individual, or the threat of it even if you are not sure you would carry it through, if it is offset by some imagined saving of the world. The problem is that nobody is omniscient, and yet AI x-riskers are willing to be swayed by utility calculations that in reality carry so much uncertainty that they should never be taken seriously. Vaniver’s reference to the unilateralist’s curse is spot-on.
Death threats are a serious matter and such behavior must be called out. If you really have received 3 or more death threats as you claim, you should be naming names of those who have been going around making death threats and providing documentation, as should be possible since you say at least two of them were online. (Not because the death threats are particularly likely to be acted on—I’ve received a number of angry death threats myself over my DNM work and they never went anywhere, as indeed >99.999% of death threats do—but because it’s a serious violation of community norms, specific LW policy against ‘threats against specific groups’, and merely making them greatly poisons the community, sowing distrust and destroying its reputation.)
Especially since, because they are so serious, it is also serious if someone is hoaxing fake death threats and concern-trolling while hiding behind a throwaway… That sort of vague, unspecific, but damaging accusation is how games of telephone get started and, for example, why, 7+ years later, we still have journalists writing BS about how ‘the basilisk terrified the LW community’ (thanks to our industrious friends over on Ratwiki steadily inflating the claims from 1 or 2 people briefly worried to a community-wide crisis). I am troubled by the coincidence that, almost simultaneously with these claims, /r/slatestarcodex, probably the most active post-LW discussion forum, is also arguing over a long post—by another throwaway account—claiming that it is regarded as a cesspit of racism by unnamed experts, following hard on the heels of Caplan/Cowen slamming LW for the old chestnut of being a ‘religion’. “You think people would do that? Just go on the Internet and tell lies?” Nor are these the first times that pseudonymous people online have shown up to make damaging but false or unsubstantiated accusations (su3su2u1 comes to mind as making similar claims and turning out to have ‘lied for Jesus’ about his credentials and the unnamed experts, as does whoever was behind that attempt to claim MIRI was covering up rape).
This is a tangent, and I made this anon account because I’m about to voice an unpopular opinion: the people who dug up su3su2u1’s identity also verified his credentials. If you look at the shlevy post that questioned his credentials, there is an ETA at the bottom that says “I have personally verified that he does in fact have a physics phd and does currently work in data science, consistent with his claims on tumblr.” His pseudonymous expertise was more vetted than most.
His sins were sockpuppeting on other rationalists’ blogs, not lying about credentials. Although, full disclosure, I only read the HPMOR review and the physics posts. We shouldn’t get too wrapped up in these ideas of persecution.
su3su2u1 told the truth about some credentials that he had, and lied by claiming that he had other credentials and relevant experiences which he did not actually have. For example:
he used a sock puppet claiming to have a Math PhD to criticize MIRI’s math papers, and to talk about how they sound to someone in the field. He is not, in fact, in the field.
and:
when he argued that allowing MIRI in AI risk spheres would turn people away from EA, a lot of people pointed out that he wasn’t interested in effective altruism anyway and should butt out of other people’s problems. Then one of his sock puppets said that he was an EA who attended EA conferences but was so disgusted by the focus on MIRI that he would never attend another conference again. This gave false credibility to his narrative of MIRI driving away real EAs.
I agree with the 1st paragraph. You could have done without the accusations of concern trolling in the 2nd.

If, as you say, you agree with the first paragraph, it might behoove you to follow the advice given in said paragraph: naming the people who threatened you and providing documentation.
And call more attention to myself? No. What’s good for the community is not the same as what protects me and my family. Maybe you’re missing the larger point here: this wasn’t an isolated occurrence, or some unhinged individual. I didn’t feel threatened by individuals making juvenile threats; I felt threatened by this community. I’m not the only one. I have not, so far, been stalked by anyone I think would be capable of doing me harm. Rather, multiple times in casual conversation it has come up that if the technology I work on advanced beyond a certain level, it would be a moral obligation to murder me to halt further progress. This was discussed just as one would debate the most effective charity to donate to. That the dominant philosophy here could lead to such outcomes is a severe problem with both the LW rationality community and x-risk in particular.
I’m curious whether this is recent or in the past. I think there has been something of a shift in the community since it became more associated with the fluffier EA movement.
You could get someone trusted to post the information, anonymised, on your behalf. I probably don’t fit that bill, though.
Unlikely. Generally speaking, people who work in ML, especially the top ML groups, aren’t doing anything close to ‘AGI’. (Many of them don’t even take the notion of AGI seriously, let alone any sort of recursive self-improvement.) ML research is not “general” at all (the ‘G’ in AGI): even the varieties of “deep learning” that are said to be more ‘general’ and to be able to “learn their own features” only work insofar as the models are fit for their specific task! (There’s a lot of hype in the ML world that sometimes obscures this, but it’s invariably what you see when you look at which models approach SOTA, and which do poorly.) It’s better to think of it as a variety of stats research that’s far less reliant on formal guarantees and more focused on broad experimentation, heuristic approaches and an appreciation for computational issues.