This is a beautiful comment. First it gets the object-level answer exactly right. Then it adds an insult to trigger Thomas and get him to gaslight, demonstrating how human the behavior is. Unfortunately, this prevents him from understanding it, so it is of value only to the rest of us.
I’ve thought about this comment, because it certainly is interesting. I think I was clearly confused in my questions to ChatGPT. (Though I will note: my tequila-drinking friends did not, and still don’t, think tequila tastes at all sweet, including “in the flavor profile” or anything like that. But it seems many would say they’re wrong!) ChatGPT was clearly confused in its response to me as well.
I think this part of my post was incorrect:
It was perfectly clear: ChatGPT was telling me that tequila adds a sweetness to the drink. So it was telling me that tequila is a sweet drink (at least, as sweet as orange juice).
I have learned today that a drink does not have to be sweet in order for many to consider it to add “sweetness.” To be honest, I don’t understand this at all, and at the time I considered it a logical contradiction. It seems a lot less clear-cut to me now.
However, the following (and the quote above it) is what I focused on most in the post. I quoted the latter part of it three different times. I believe it is entirely unaffected by whether or not tequila is canonically considered to be sweet:
“I was not referring to the sweetness that comes from sugar.” But previously, ChatGPT had said “tequila has a relatively low alcohol content and a relatively high sugar content.” Did ChatGPT really forget what it had said, or is it just pretending?
Is ChatGPT gaslighting me?
Thomas: You said tequila has a “relatively high sugar content”?
ChatGPT: I apologize if my previous response was unclear. When I said that tequila has a “relatively high sugar content,” I was not suggesting that tequila contains sugar.
You’re right: ChatGPT did contradict itself, and the chatbot it created based on the prompt (assuming it was all part of a single conversation) tried to gaslight you.
Yes, this is a good illustration of you acting just like GPT.