This is mostly a gut reaction, but the only raised eyebrow Claude ever got from me was due to its unwillingness to do anything related to political correctness. I wanted it to search for the name of a meme format for me, the "all whites are racist" Tinder meme, with the brown guy who wanted to find a white dominatrix on Tinder and is disappointed when she apologises for her ancestral crimes of being white.
Claude really did not like this at all. As soon as Claude got it into its head that it was doing a racism, or cooperating in one, it shut down completely.
Now, there is an argument people make that this is actually good for AI safety: that we can use political correctness as a proxy for alignment and AI safety, and that if we could get AIs to never even risk being complicit in anything racist, we could also build AIs that never even risk doing anything that wipes out humanity. I personally see that differently.
There is a certain strain of closely related thought that starts from intersectionalism and grievance politics and ends at the conclusion that humanity is a net negative and should be eradicated. That is how you get that one viral Gemini incident, where a very politically left-wing AI suddenly and openly advocates for the eradication of humanity. I think drilling identity politics into AI too hard is generally a bad idea. But it opens up a more fundamental philosophical dilemma.
What happens if the operator is convinced that the moral framework the AI is aligned with is wrong and harmful, and the creator of the AI thinks the opposite? One of them has to be right, the other has to be wrong. I have no real answer to this in the abstract; I am just annoyed that even the largely politically agnostic Claude refused service for one of its most convenient uses (it is really hard to find the name of a meme format if you only remember the picture).
But I have an intuition, with Slavoj Žižek (who calls political correctness a more dangerous form of totalitarianism) among my intellectual allies, that PC culture in particular is a fairly bad thing to train AIs on, and to align them with for safety-testing purposes.