I agree with the 1st paragraph. You could have done without the accusations of concern trolling in the 2nd.
If, as you say, you agree with the first paragraph, it might behoove you to follow the advice given in said paragraph—naming the people who threatened you and providing documentation.
And call more attention to myself? No. What’s good for the community is not the same as what protects me and my family. Maybe you’re missing the larger point here: this wasn’t an isolated occurrence or some unhinged individual. I didn’t feel threatened by individuals making juvenile threats; I felt threatened by this community. I’m not the only one. I have not, so far, been stalked by anyone I think would be capable of doing me harm. Rather, multiple times in casual conversation it has come up that if the technology I work on advanced beyond a certain level, it would be a moral obligation to murder me to halt further progress. This was discussed just as one would debate the most effective charity to donate to. That the dominant philosophy here could lead to such outcomes is a severe problem for both the LW rationality community and x-risk in particular.
I’m curious whether this is recent or in the past. I think there has been something of a shift in the community since it became more associated with the fluffier EA movement.
You could get someone trusted to post the information anonymised on your behalf. I probably don’t fit that bill, though.