I agree, and would like to proffer as a concrete example the deliberate conflation of two kinds of AI safety: one against AI saying the word “nigger” or generating nonconsensual nudity/pornography, and the more traditional one against AI turning everyone into paperclips. The idea is to sow enough confusion that they can then use the popularity of the former as a means of propping up the latter. I consider this behavior antithetical to truth-seeking.
Hey, uh, I don’t wanna overly police people’s language, but this is the second time in a week you’ve used the n-word specifically as your example here, and it seems like, at best, an unnecessarily distracting example.
No, I maintain this is THE central example of the goals of this new (and overwhelmingly dominant) AI safety, enforcing the single taboo in present-day America most akin to blasphemy, and precisely as victimless. If they were the Islamic analogue, I’d use the example of the caricatures of Mohammed every time, “distracting” as it may be to those of the faith: using any other is disingenuous, and contrary to my deeply-held value of speaking the truth as best I see it.
LessWrong has a pretty established norm of not using unnecessarily political examples. (See Politics is the Mind-Killer). I don’t object to you writing up a top level post arguing for the point you’re trying to make here. But I do object to you injecting your pet topic into various other comment threads in particularly distracting ways (especially ones that are only tangentially about AI, let alone about your particular concern about AI and culture/politics/etc).
When you did it last week, it didn’t seem like something it felt right for the mods to intervene on heavy-handedly (some of us downvoted as individuals). But, it sounds like you’re going out of your way to use an inflammatory example repeatedly. I am now concretely asking you as a moderator to not do that.
I’m locking this thread since it’s pretty off-topic. You can go discuss it more at the meta-level over in the Open Thread, if you want to argue about the overall LessWrong moderation policy.