Did you try the beet margarita with orange juice? Was it good?
To be honest, this exchange seems completely normal for descriptions of alcohol. Tequila is canonically described as sweet. You are completely correct that when people say “tequila is sweet” they are not trying to compare it to super stimulants like orange juice and Coke. GPT might not understand this fact. GPT knows that the canonical flavor profile for tequila includes “sweet”, and your friend knows that it’d be weird to call tequila a sweet drink.
I think the gaslighting angle is rather overblown. GPT knows that tequila is sweet. GPT knows that most of the sugar in tequila has been converted to alcohol. GPT may not know how to reconcile these facts.
Also, I get weird vibes from this post as generally performative about sobriety. You don’t know the flavor profiles of alcohol, and the AI isn’t communicating the flavor profiles of alcohol well. Why are you writing about the AI’s lack of knowledge about the difference between tequila’s sweetness and orange juice’s sweetness? You seem like an ill-informed person on the topic, and like you have no intention of becoming better informed. From where I stand, it seems like you understand alcohol taste less than GPT.
I’m going to address your last paragraph first, because I think it’s important for me to respond to, not just for you and me but for others who may be reading this.
When I originally wrote this post, it was because I had asked ChatGPT a genuine question about a drink I wanted to make. I don’t drink alcohol, and I never have. I’ve found that even mentioning this fact sometimes produces responses like yours, and it’s not uncommon for people to think I am mentioning it as some kind of performative virtue signal. People choose not to drink for all sorts of reasons, and maybe some are being performative about it, but that’s a hurtful assumption to make about anyone who makes that choice and dares to admit it in a public forum. This is exactly why I am often hesitant to mention this fact about myself, but in the case of this post, there really was no other choice (aside from just not posting this at all, which I would really disprefer). I’ve generally found the LW community and younger generations to be especially good at interpreting a choice not to drink for what it usually is: a personal choice, not a judgment or a signal or some kind of performative act. However, your comment initially angered and then saddened me, because it views my choice through a lens of suspicion. That’s generally a fine lens through which to look at the world, but I think in this context, it’s a harmful one. I hope you will consider thinking a little more compassionately in the future with respect to this issue.
To answer your object-level critiques:
The problem is that it clearly contradicts itself several times, rather than admitting a contradiction it doesn’t know how to reconcile. There is no sugar in tequila. Tequila may be described as sweet (nobody I talked to described it as such, but some people on the internet do) for non-sugar reasons. In fact, I’m sure ChatGPT knows way more about tequila than I do!
It is not that it “may not know” how to reconcile those facts. It is that it doesn’t know, makes something up, and pretends it makes sense.
A situation where somebody interacting with the chatbot doesn’t know much about the subject area is exactly the kind of situation we need to be worried about with these models. I’m entirely unconvinced that the fact that some people describe tequila as sweet says much at all about this post. That’s because the point of the post was rather that ChatGPT claimed tequila has high sugar content, then claimed that actually the sweetness is due to something else, and it never really meant that tequila has any sugar. That is the problem, and I don’t think my description of it is overblown.
OpenAI should likely explicitly train ChatGPT to be able to admit its errors.
It should! I mentioned that probable future outcome in my original post.
This is actually pretty difficult, because it can encourage very bad behaviors. If you train for this, the model will learn that the optimal strategy is to make subtle errors: if they are subtle, they might get rewarded (wrongly) anyway, and if you notice the issue and call it out, the model will still be rewarded for admitting its errors.
I think this type of training could still be useful, but as a separate line of research into the human readability of similar models’ thought processes. Asking a model to explain its own errors could prove useful, but as the main training objective it would be counterproductive: it’s going to settle into a very non-ideal local minimum.
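To see why concretely, here is a toy expected-reward sketch in Python. Every number in it is invented purely for illustration (the detection probability and the reward values are assumptions, not measurements from any real RLHF setup):

```python
# Toy model of the incentive problem described above.
# All numbers are invented for illustration; nothing here is
# measured from a real training setup.

P_DETECT = 0.4      # assumed chance a rater catches a *subtle* error
R_CORRECT = 1.0     # reward for a genuinely correct answer
R_SLIPPED = 1.0     # a subtle error the rater misses is scored as correct
R_CAUGHT = 0.2      # the flawed answer itself, once the error is caught
R_ADMISSION = 1.0   # extra reward for gracefully admitting the error

# Policy A: always answer correctly.
honest = R_CORRECT

# Policy B: slip in subtle errors. They usually go unnoticed and are
# rewarded as correct; when caught, the admission reward is collected
# on top of the penalized answer.
subtle_errors = (1 - P_DETECT) * R_SLIPPED + P_DETECT * (R_CAUGHT + R_ADMISSION)

print(f"honest policy:       {honest:.2f}")         # 1.00
print(f"subtle-error policy: {subtle_errors:.2f}")  # 0.60 + 0.48 = 1.08
```

With these made-up numbers, the subtle-error policy out-earns honesty, which is exactly the bad local minimum described above; the incentive only disappears if a caught-then-admitted error pays less in total than a plainly correct answer.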
I am sorry for insulting you. My experience in the rationality community is that many people choose abstinence from alcohol, which I can respect, but I forgot that in many social circles that choice likely leads to feelings of alienation. While I thought you were signaling in-group allegiance, I can see that you might not have that connection. I will attempt to model better in the future, since this seems generalizable.
I’m still interested in whether the beet margarita with OJ was good~
I appreciate this. I don’t even consider myself part of the rationality community, though I’m adjacent. My reasons for not drinking have nothing to do with the community and existed before I knew what it was. I actually get the sense this is the case for a number of people in the community (more of a correlation or common cause rather than caused by the community itself). But of course I can’t speak for all.
I will be trying it on Sunday. We will see how it is.
This is a beautiful comment. First it gets the object-level answer exactly right. Then it adds an insult to trigger Thomas and get him to gaslight, demonstrating how human the behavior is. Unfortunately, this prevents him from understanding it, so it is of value only to the rest of us.
I’ve thought about this comment, because it certainly is interesting. I think I was clearly confused in my questions to ChatGPT, and ChatGPT was clearly confused in its response to me as well. (Though I will note: my tequila-drinking friends did not and still don’t think tequila tastes at all sweet, including “in the flavor profile” or anything like that. But it seems many would say they’re wrong!)
I think this part of my post was incorrect:
It was perfectly clear: ChatGPT was telling me that tequila adds a sweetness to the drink. So it was telling me that tequila is a sweet drink (at least, as sweet as orange juice).
I have learned today that a drink does not have to be sweet in order for many to consider it to add “sweetness.” To be honest, I don’t understand this at all, and at the time considered it a logical contradiction. It seems a lot less clear cut to me now.
However, the following (and the quote above it) is what I focused on most in the post. I quoted the latter part of it three different times. I believe it is entirely unaffected by whether or not tequila is canonically considered to be sweet:
“I was not referring to the sweetness that comes from sugar.” But previously, ChatGPT had said “tequila has a relatively low alcohol content and a relatively high sugar content.” Did ChatGPT really forget what it had said, or is it just pretending?
Is ChatGPT gaslighting me?
Thomas: You said tequila has a “relatively high sugar content”?
ChatGPT: I apologize if my previous response was unclear. When I said that tequila has a “relatively high sugar content,” I was not suggesting that tequila contains sugar.
You’re right, ChatGPT did contradict itself, and the chatbot it created based on the prompt (assuming it was all part of a single conversation) tried to gaslight you.
Yes, this is a good illustration of you acting just like GPT.