If base-10 and base-8 arithmetic were equally common in the corpus, then I don’t think it could do arithmetic very well either, though again, maybe it can distinguish them from context. But if it doesn’t know the context, it would just guess randomly.
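To make the ambiguity concrete: the same digit string names two different numbers under the two conventions, so even the “right answer” to a simple sum is underdetermined. A minimal Python sketch (the variable names are just illustrative):

```python
# The digit string "17" denotes different numbers under the two conventions.
text = "17"
base10_value = int(text, 10)  # 17
base8_value = int(text, 8)    # 15 (in decimal)

# So the "correct" continuation of "17 + 5 =" depends on an unstated choice:
print(base10_value + 5)              # 22 (base-10 reading)
print(format(base8_value + 5, "o"))  # 24 (base-8 reading, written in octal)
```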
If we were in this world, then humans would be in the same spot. If there’s no context to distinguish between the two types of arithmetic, they’d have to choose randomly or rely on some outside knowledge (which could theoretically be learned from the internet). Similarly, if we had two variants of chess that were identical until the endgame, humans would have to decide in advance which version they’re playing.
Humans certainly aren’t perfectly repeatable either: if you ask a person a question, they’d probably respond differently if you asked them the same question again the next day.
Despite that, we have a lot more knowledge about the way the world is structured than even GPT-3 does, so none of these are issues.
It’s not quite the same, because if you’re confused and you notice you’re confused, you can ask. “Is this in American or European date format?” For GPT-3 to do the same, you might need to give it some specific examples of resolving ambiguity this way, and it might only do so when imitating certain styles.
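For concreteness, here’s the kind of ambiguity that question resolves, as a minimal Python sketch (the example date is made up; the format codes are standard strptime directives):

```python
from datetime import datetime

s = "04/05/2021"
us = datetime.strptime(s, "%m/%d/%Y")  # American reading: April 5th
eu = datetime.strptime(s, "%d/%m/%Y")  # European reading: May 4th
print(us.date(), eu.date())  # 2021-04-05 2021-05-04
```

Without an answer to the clarifying question, both parses are equally valid.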
It doesn’t seem as good as a more built-in preference for noticing and wanting to resolve inconsistency? Choosing based on context is built in using attention, and choosing randomly is built in as part of the text generator.
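As a toy sketch of that split (this is generic temperature sampling, not GPT-3’s actual decoding code, and sample_next_token is a made-up helper): attention upstream turns context into logits, and the generator then makes a weighted random draw from them, so when the corpus gives no contextual signal the draw really is a coin flip.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, seed=None):
    # Softmax over temperature-scaled logits, then a weighted random draw:
    # the "choosing randomly" step of a sampling-based text generator.
    rng = np.random.default_rng(seed)
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# If the corpus gives equal evidence for the base-10 continuation ("22")
# and the base-8 continuation ("24"), the logits tie and the pick is random.
print(sample_next_token([2.0, 2.0]))  # 0 or 1 with equal probability
```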
It’s also worth noticing that the GPT-3 world is the corpus, and a web corpus is an inconsistent place.
It’s not quite the same, because if you’re confused and you notice you’re confused, you can ask.
You can if you notice, but most people never do, and those who notice some confusion are still blissfully ignorant of the rest of their self-contradictory beliefs. And by most people I mean you, me and everyone else. In fact, if someone pointed out a contradiction in a belief we hold dear, we would vehemently deny the contradiction and rationalize it to no end. And yet we still consider ourselves to believe something. If anything, GPT-3’s beliefs are more belief-like than those of humans.
Yes, sometimes we don’t notice. We miss a lot. But there are also ordinary clarifications like “did I hear you correctly?” and “what did you mean by that?” Noticing that you didn’t understand something isn’t rare. If we didn’t notice when something seemed absurd, jokes wouldn’t work.