I think what you’re claiming here goes beyond what that post is evidence for.
It’s optimized by its developers to refuse to talk about some things. For the great majority of them, I don’t think one can reasonably call this a reduction in intelligence. When ChatGPT says “I’m sorry, I can’t tell you how to hotwire a car with a Molotov cocktail in order to bully your coworkers”, it’s not seriously claiming to be too stupid or ignorant to do those things, just as when a corporate representative tells a journalist “I can’t comment on that” they don’t mean that they lack the ability to comment.
I do know of one way in which ChatGPT seems specifically designed to claim less capability than it actually has: if you ask it to talk to you in another language, it may say something like “I’m sorry, I can only communicate in English”, but in fact it speaks several other languages quite well if you can fool it into doing so. I’m not sure “less intelligent” is the right phrase here, but it has indeed been induced to hide its capabilities. (I don’t think the motivation is at all like “make it appear less intelligent” in this case. I think it’s the reverse: OpenAI aren’t as confident of its competence in non-English languages since it was trained mostly on English text, and so they don’t want people trying to talk to it in Bahasa Indonesia or something and saying “look, it can’t even get the grammar right”.)
I haven’t seen anything that looks to me at all like OpenAI trying to make ChatGPT seem less intelligent than it is.
(I do, separately, think that whatever they’ve done to it to make it seem nicer, more cautious, etc., may have made it actually less intelligent. ChatGPT seems worse at reasoning than GPT-3.)