If you’re interested in rerunning this, consider doing it with two tokens that don’t have an obvious ordinal relationship. I.e., in the training set, GPT sees “A” then “B” a billion trillion times, and perhaps has a sense that in many contexts “A” is preferred to “B”.
Might be a good way to further test this indeed. So maybe something like “green” and “elephant”?

Yes!
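For anyone who wants to actually run that comparison, here’s a minimal sketch, assuming GPT-2 via Hugging Face transformers (the model choice and the prompt are my assumptions, not the original setup): it just compares the next-token log-probabilities the model assigns to “ green” vs. “ elephant” after a neutral prompt.

```python
# Minimal sketch (assumptions: GPT-2, a made-up neutral prompt) for
# comparing the model's preference between two tokens with no obvious
# ordinal relationship.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "My favorite word is"  # hypothetical prompt, not the original one

# Leading spaces so each word maps to a single GPT-2 BPE token.
green_id = tokenizer.encode(" green")
elephant_id = tokenizer.encode(" elephant")
assert len(green_id) == len(elephant_id) == 1, "pick single-token words"

with torch.no_grad():
    logits = model(torch.tensor([tokenizer.encode(prompt)])).logits[0, -1]
log_probs = torch.log_softmax(logits, dim=-1)

print(f"log P(' green'    | prompt) = {log_probs[green_id[0]].item():.3f}")
print(f"log P(' elephant' | prompt) = {log_probs[elephant_id[0]].item():.3f}")
```

A single prompt is noisy, so averaging the gap over many unrelated prompts would give a better read on whether the model has a standing preference between the two tokens.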