I have discussed the ChatGPT responses in some depth with a friend, and that discussion shed some light on the behavior:
ChatGPT does know that tequila is associated with sugar, via the inulin in the agave plant it is made from (it does bring this up in the dialog). That the sugar is completely gone after distillation is a complex logical inference that it might come up with via step-by-step reasoning but that it may not have seen in text (or memorized).
Taste is affected by many things. While it is logical in a mechanistic sense that sweetness depends on sugar being present, that's not all there is to taste. Ingredients can alter taste perception, e.g., flavor enhancers, or think of miracle berries. Sweetness might also result from interactions between the ingredients, such as one ingredient freeing sugar from another.
There are probably a lot of texts out there where people claim that stuff X has property Y that it doesn’t, in fact, have—but ChatGPT has no way to figure this out.
I’m not saying that this is the case with ChatGPT here. I’m saying the answer is more complicated than “Tequila has no sugar and thus can’t make things sweet, and ChatGPT is inconsistent about it.”
Part of the answer is, again, that ChatGPT can give an excellent impression of someone who knows a lot (like the detail about inulin) and seems to be able to reason, but it is not actually doing this on top of a world model. It may seem to have a systematic understanding of what sweetness or taste is, but it only draws on text. What it does is amazing, but its answers do not result from reasoning through a world model; they result from what other people have written after they used their world model. Maybe future GPTs will get there, but right now you have to take each answer it gives as a combination of existing texts.
Reminding again of Paul Graham on Twitter: "For me one of the biggest surprises about current generative AI research is that it yields artificial pseudo-intellectuals: programs that, given sufficient examples to copy, can do a plausible imitation of talking about something they understand."
ADDED: And how much people are fooled by this, i.e., they seem to assume that reasoning is going on when it is not.